Hi, all. Finally migrated from Kubuntu to Debian 12 over the weekend. It's working great, as I figured it would, with one exception: The system isn't turning the monitors off after 10 minutes. It's blanking them, but they're clearly still on.
One monitor is on an AMD graphics card, the other is on the motherboard Intel adapter.
Debian 12 with KDE Plasma running on Wayland with sddm login. It previously worked fine on Kubuntu (which I believe was running X11). It's a fresh Debian install on a different drive; I didn't overwrite the Kubuntu installation.
In the Energy Saving settings, I have "Screen energy saving" checked with a delay of 10 minutes. (I have "suspend session" turned off - one, because I don't want the computer to sleep or suspend, and two, because when I woke it up again, the graphics were garbled and I had to reboot.) As I said, it does blank the screens, but they're still clearly on. I want them to go into power save mode.
I've tried running dpkg-reconfigure and selecting sddm, no change. In KDE's background services, I tried turning off KScreen 2, but that didn't help (though I'm not sure if I rebooted after turning it off, now that I think about it).
I found advice somewhere that suggested deleting .config/powermanagementprofilesrc and rebooting; I did that, no change.
I did notice yesterday that the monitors had shut off...after a very long time of being idle. I'm not sure how long, but more than overnight, for certain.
Any advice or suggestions? Unfortunately, searching is difficult, because I get a lot of results where the screen blanks when it shouldn't. I haven't found much for this problem.
I used the same installer on my laptop to do the same migration (also with KDE Plasma and sddm) and it works fine there.
I've had this problem for years. I contributed to one of the existing bug reports for kscreen on this.
If I don't log in to KDE (i.e. I stay at the sddm login screen), the screens shut off normally... But once I log in, the problem starts. So I concluded the problem is with kscreen2. I even tested by killing kscreen2, and the problem goes away.
Sorry, I don't have all the links available, but the root cause of the problem is apparently in the amdgpu driver.
Kscreen was supposed to implement some kind of workaround, but I lost track of how that was going.
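Until that workaround materializes, one possible stopgap on a Plasma Wayland session is to force the outputs into power-save by hand. A minimal sketch, assuming a Plasma version recent enough that kscreen-doctor supports the --dpms flag:

```shell
# Stopgap sketch: ask the compositor to power the outputs down.
# Assumes kscreen-doctor (shipped with Plasma) supports --dpms.
force_dpms_off() {
    if command -v kscreen-doctor >/dev/null 2>&1; then
        kscreen-doctor --dpms off
    else
        echo "kscreen-doctor not found" >&2
        return 1
    fi
}
```

Any input wakes the outputs again, so this can be bound to a key or run from a timer.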
I'm 43, almost 44, years old and went through a bout of alcoholism during the early part of the pandemic. I went through treatment and have been fine since. However, I can't help but feel that all the news in the last few months is just the worst. Between the AI bullshit, the wars, the effects of capitalism, and the political situation in general, it's just the worst. Is it just me, or have other folks noticed the same trend?
Edit: I should have also mentioned the enshittification of everything tech-related.
Edit 2: Thanks for all the thoughtful replies. For some more context, yes, I'm American and live in a state that's about to ban the wearing of masks in public. I haven't had a drink in over a year and have been in therapy for 3 years. I don't watch any news sources and rarely read media websites. And yet, that information seeps into my life somehow. I donate blood, I make charitable donations, and try to live a good life. I have 2 amazing kids and a great wife. It's just hard not to end up in a doomer mindset at times. A Bitcoin company bought a power plant up here that has an existing lease to use a lake as cooling water, and it's heated up the lake to the point that it's killing fish.
Remember that there are biases at play here. There's the negativity bias (we worry more about bad things happening, than we are uplifted about good things happening), and the media bias to report the worst. As Pinker wrote:
News is about things that happen, not things that don't happen. We never see a journalist saying to the camera, "I'm reporting live from a country where a war has not broken out". (...) As long as bad things have not vanished from the face of the earth, there will always be enough incidents to fill the news, especially when billions of smartphones turn most of the world's population into crime reporters and war correspondents.
Combine the two, and you will naturally have all media preferentially report (and often blow out of proportion for the views and clicks) bad news over good news.
Edit: typo and grammar
I see you never got a reply to your question. I am obviously biased in favour of Pinker, but my perception is that "liberal hack" (and other epithets) is a mindless insult that people throw at him when they don't like the uplifting message he's communicating, but can't find anything logically or factually wrong with his arguments or his presentation of data.
The closest I saw someone come to a legitimate case of Pinker misrepresenting reality was the criticism of this passage (also from "Enlightenment Now"):
What proportion of pairs of ethnic neighbors coexist without violence? The answer is, most of them: 95 percent of the neighbors in the former Soviet Union, 99 percent of those in Africa.
(i.e. only 1% of them, in Africa's case, is in conflict)
Critics pointed out that, at the time of Pinker's writing, the number of countries in Africa at war was X, and X divided by the number of all countries in Africa is much greater than 1%, so clearly Pinker is lying. But firstly, the passage talks about ethnic neighbours, not countries, and there are far more pairs of neighbours than countries in Africa and the former Soviet Union; and secondly, there are almost always more pairs of neighbours than there are countries in any region. For example, in Australia there are 5 mainland states but 6 borders (pairs of neighbouring states), so if Queensland went to war with New South Wales, 60% of the states would be at peace, but 83% of pairs of neighbours would still be at peace.
Edit: grammar
yt-dlp. Too many options to remember and look up every time, but all useful and missing from GUIs when you just want to download audio or 'good enough' quality video in batches without re-encoding.
While nmtui is perfectly fine for the CLI-uninitiated, I sometimes wonder why the nm-connection-editor window doesn't provide the same level of functionality.
> Too many options to remember and look up every time
This is a good use case for shell aliases. If you can identify a few of your use cases, you can give each bundle of options its own command.
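For example, each bundle can be a small shell function (functions handle extra arguments more gracefully than aliases do). The option sets below are illustrative picks, not anyone's actual config:

```shell
# Illustrative yt-dlp option bundles wrapped as shell functions.
yta() {  # audio only, extracted to a standalone file
    yt-dlp -f bestaudio -x "$@"
}
ytv() {  # "good enough" video without re-encoding
    yt-dlp -f 'bv*[height<=1080]+ba/b[height<=1080]' "$@"
}
```

Drop them in ~/.bashrc (or your shell's equivalent) and call `yta URL`.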
(Windows-only warning, unless someone wants to add Linux support.)
I didn't really search around for GUIs back then, but I ended up making a basic GUI because I wanted to learn programming. It just has the youtube-dl options as checkboxes, and it has served me well all these years.
It was literally the thing I made while learning programming, so the code is pretty janky when I look back at it though...
There’s a Firefox extension that generates the CLI command for whatever video you’re on. Lets you check boxes for the format, SponsorBlock, etc., and then copies it to your clipboard.
Just search the addon store for yt-dlp and it should show up
Btw, here's my config file.
```
-o "%(title)s (%(uploader_id)s).%(ext)s"
-P ~/Videos
-P "temp:/tmp/yt-dlp/"
-f 271+ba[language=en][ext=m4a]/308+ba[language=en][ext=m4a]/137+ba[language=en][ext=m4a]/299+ba[language=en][ext=m4a]/231+ba[language=en][ext=m4a]/http_mp3_128/271+140/308+140/137+140/299+140/231+140
--download-archive ~/.config/yt-dlp/dl-archive
--no-playlist
--write-sub
--no-mtime
--compat-options no-live-chat
```

https://github.com/aandrew-me/ytDownloader
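For anyone decoding the long `-f` line in the config above: selectors separated by `/` form a fallback chain tried left to right, and `+` merges a video stream with an audio stream. A shortened sketch of the same structure (271 and 137 are YouTube-specific itags for 1440p WebM and 1080p MP4 video):

```shell
# '/' = fallback chain (first selector that matches wins),
# '+' = merge separate video and audio streams,
# '[...]' = filters on the candidate format.
fmt='271+ba[language=en][ext=m4a]/137+ba[language=en][ext=m4a]/b'
# yt-dlp -f "$fmt" URL   # (not run here: needs network)
```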
We use a doc where we can't just manage the config.
As well, there are a host of tools that all purport to manage your WireGuard for you (generally using Consul) that may be better. Assuming your goal is "GUI because I want X" for some value of X, one of those manager apps may get you there without you needing to care about the GUI.
Just off the top of my head, discovered today.
Not a missing GUI, as one exists, but one that should be more configurable, as the existing one is bad for the visually impaired.
The rpi-imager GUI does not respect theme settings for font size etc. Worse, it has no configuration option to change such things.
That makes it pretty much unusable for anyone with poor vision.
Also, needs vary between visually impaired individuals, but dark mode is essential for some of us.
So if you're looking for small projects, you'd at least make me happy ;)
Appears to work as well as it does on Windows. I guess the only downside is learning PowerShell if you have no previous experience with it.
Pandoc, for sure. I love its versatility, it's made it super easy for me to do most of my writing in markdown — and a lot of MD editors have it built-in as an export feature.
But I use it too rarely to know the CLI commands by heart, and sometimes it would just be super helpful to open a GUI and batch convert (and/or collate) a bunch of files to a new format.
Tell you what, throw Imagemagick and maybe a light OCR backend into the package as a Swiss Army Knife for document management, I'd probably be happy.
I'd love to have archivemount or a similar tool integrated in a file manager
I'd also love some sort of full-featured GUI software to install and manage custom ROMs on phones, allowing you to do everything from unlocking bootloaders to downloading and flashing/upgrading ROMs. For the tasks that require manual steps, it could offer illustrated instructions, with a community-driven database of phone models.
I'm not them, but sorting by columns, filtering, searching with highlights would be useful. Also, specifying the columns you wish to see.
After writing it down, this sounds like plain spreadsheet operations, so the real value of such a tool would be doing all of the above while watching changes live.
There are also other things that would be useful, like a feature to select multiple directories for watching, or live output to a file in the original format. Maybe also JSON output for when you'd use it from code, but that's maybe not that useful, because then why not just use the API directly...
Perhaps some patterns for which ones to send as an audible system notification.
Persepolis Download Manager is a libre, open-source download manager supported on GNU/Linux, BSDs, macOS, and Windows.
```
pactl set-default-sink $(pactl list short sinks | awk '{print $2}' | tofi $tofi_args)
```
`--help` output. Maybe there's some possibility to GUIfy that sort of thing?

https://github.com/KDE/systemdgenie
https://github.com/hardpixel/systemd-manager
Does set-default-sink change an already-playing stream? Or do you need move-sink-input?
I've looked at the manpages but was a bit overwhelmed and didn't try to make my own script. Your solution gives me motivation to do so. I also use sway and pipewire. Though I use fuzzel for my launcher.
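For what it's worth, whether set-default-sink redirects streams that are already playing depends on the sound server (with PipeWire's Pulse compatibility layer, streams that follow the default generally move; classic PulseAudio setups may leave them on the old sink). A sketch that switches the default and moves live streams explicitly, using fuzzel as the picker — a hypothetical adaptation, not tested:

```shell
# Switch the default sink and explicitly move already-playing streams.
switch_sink() {
    sink="$(pactl list short sinks | awk '{print $2}' | fuzzel --dmenu)"
    [ -n "$sink" ] || return 1
    pactl set-default-sink "$sink"
    # set-default-sink may not touch existing streams, so move them too:
    pactl list short sink-inputs | awk '{print $1}' |
        while read -r input; do
            pactl move-sink-input "$input" "$sink"
        done
}
```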
My laptop, desktop pc, and VMs are running Linux. All of them (except the laptop) are remotely accessible over the local network via Moonlight game stream using Sunshine as the hosting software.
I use USB/IP to send things like a DualSense controller or USB headset over the network, as well as my YubiKey if I need to log into something with FIDO2 authentication remotely (I haven't tested the YubiKey over USB/IP yet, but I will eventually). I've also managed to use my racing wheel this way, but if it lags, it hurts the game badly.
Webcam / headset / USB storage devices / game controllers work just fine so far.
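For reference, the USB/IP flow is only a few commands. A sketch assuming the usbip userland tools and kernel modules are installed; "server.lan" and the bus id "1-1.4" are placeholders (find real bus ids with `usbip list -l` on the server):

```shell
# USB/IP in a nutshell (sketch; hostname and bus id are placeholders).
share_device() {   # run on the machine the device is plugged into
    sudo modprobe usbip-host
    sudo usbipd -D                 # start the USB/IP daemon
    sudo usbip bind -b 1-1.4       # export the device
}
attach_device() {  # run on the client
    sudo modprobe vhci-hcd
    sudo usbip attach -r server.lan -b 1-1.4
}
```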
The whole CLI. Linux should automatically generate default GUIs from manpages and code, to be developed further by the crowd of users on the desktop. It's pointless to handcraft both interfaces one app at a time.
I like Linux Mint (compared to Ubuntu, Debian, and Windows) because usually right-clicking takes me closer to the solution I'm looking for, but it doesn't allow me to dig deep enough. It should be discoverable all the way from the desktop to what makes it tick. Think of Smalltalk by Alan Kay in Xerox PARC in the 1970s, or what it would be now had it been mainstream all this time. #discoverability #explorability
A lot of GUIs have fewer options available than their CLI equivalents. Moreover, GUIs change more often, requiring you to relearn the actions needed to get the expected result.
Shells remember the commands you used, and commands are also way easier to write down on paper than a list of actions to perform in a GUI.
And using man or --help doesn't mean going somewhere else to learn the options: you stay in the shell.
If you want to know all the features of a tool, reading the manual is also easier than browsing all the GUI
The CLI lets the user automate tasks, giving them more control over their workflow
I love programs like FreeCAD despite the really hard/unintuitive GUI.
95% of all the modelling I need to do (as an amateur) can be done easily in a Python script.
The finishing touches, like adding fillets and chamfers, are the annoying part where the GUI is easier, due to the way edges are referenced.
Likewise at work, we have to produce a lot of regular reports in excel.
All done via python / sql.
Don't know if it's illegitimate otherwise 😉
But my user story is like this:
I want to preserve and archive information I used because it's a reflection of the things I did, learned and studied throughout life.
Then my use cases are:
My current workflow:
I would like to automate the last 3 steps of my workflow.
A single, decent, maintained one for LVM.
Red Hat had a couple of goes at this, and they suck ass big time and rely on KDE (so no good for any other DE / WM). I'm not sure anything really works, so I'll say: none exists.
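Until such a GUI exists, the operations it would need to wrap are a handful of CLI commands. A sketch (the volume group name "vg0" is a placeholder, and the destructive commands are deliberately commented out):

```shell
# The CLI surface an LVM GUI would need to cover (requires root).
lvm_overview() {
    pvs    # summarize physical volumes
    vgs    # summarize volume groups
    lvs    # summarize logical volumes
}
# lvcreate -L 10G -n data vg0           # create a 10 GiB LV in vg0
# lvextend -r -L +5G /dev/vg0/data      # grow an LV and its filesystem
```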
Anything that needs to be configured with YAML, and Kubernetes in particular.
I mean, I get the whole Infrastructure as Code hype (although I have never witnessed or heard of a situation where an entire cluster needed to be revived from scratch), but it should be very possible to make a GUI that writes the YAML for you.
I don't want to memorize every possible setting and what it does, and if someone makes a typo in the config (or in the whitespace, as it's YAML), everything is borked.
Call me old-fashioned, but the graphical UI of something like Octopus Deploy was a thousand times more user-friendly, imho.
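To be fair, some of that schema knowledge is already queryable from the CLI; a sketch assuming a configured kubectl and a hypothetical app.yaml:

```shell
# kubectl already ships the schema a GUI could be generated from.
k8s_schema_help() {
    kubectl explain deployment.spec.strategy   # field docs straight from the schema
    kubectl apply --dry-run=server -f app.yaml # validate against the API server without applying
}
```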
I think it’d be easy to make a generic YAML editor where all you need to do is pass a “definitions” file that lists all the possible options to show as a dropdown, toggle, etc.
That would be useful for many projects.
That UI is called VSCode. At the top of your .yaml file, you can set a JSON Schema. Example:

```yaml
# yaml-language-server: $schema=https://json.schemastore.org/prometheus.json
scrape_configs:
  - job_name: caddy
    static_configs:
      - targets:
          - caddy:2019
```
https://github.com/WinFF/winff
Not sure, search on "screenshot lazy load Fireshot" or "screenshot lazy load Linkwarden" does not turn up anything conclusive.
Do you have an example?
https://discourse.gnome.org/t/towards-a-better-way-to-hack-and-test-your-system-components/21075
This one doesn't actually seem to load new network requests, but the way the scrolling works seems to break any other screenshot application I've tried.
You're about to take your first steps in the wonderful world of Linux, but you're overwhelmed by the amount of choices? Welcome to this (I hope) very simple guide 😀
The aim of this guide is to provide simple, clear information to ease your transition as a beginner. This is not a be-all-end-all guide nor an advanced guide. Because there is a lot of info and explanations everywhere, I will often (over-)simplify so as to keep the information accessible and digestible. Please refrain from asking to add your favorite distro/DE in the comments, I feel there is too much choice already ;)
Nowadays most relatively recent hardware works perfectly fine on Linux, but there are some edge cases still. If you don't use niche hardware and your wifi card is supported, chances are you're golden. Please note that nVidia is a bad faith player in the Linux world, so if you have a GeForce GPU, expect some trouble.
If some proprietary app is essential to your workflow and is irreplaceable, consider running it in a VM, keeping a Windows partition for it or try and run it through Wine (this is advanced stuff though).
Things work differently, and this is normal. You will probably struggle at the beginning while adjusting to a new paradigm. You may have to troubleshoot some things. You may break some things in the process. You will probably get frustrated at some point or another. It's okay. You're learning something new, and it can be hard to shed old habits forged by years on another system.
Arch Wiki is one of the greatest knowledge bases about Linux. Despite being heavily tied to Arch, most of its content is readily usable to troubleshoot most modern distros, as the building blocks (Kernel, systemd, core system apps, XOrg/Wayland, your DE of choice etc.) are the same. Most distros also maintain their own knowledge base.
Linux, in the strictest definition, is the kernel: the core component that, among other things, orchestrates and handles all interactions between hardware and software. It sits at the heart of a large family of operating systems that, by metonymy, are also called "Linux". In general understanding, Linux is any one of these operating systems, called distros.
A distro, short for "Software Distribution", is a cohesive ensemble of software, providing a full operating system, maintained by a single team. Generally, all of them tend to provide almost the same software and work in a very similar way, but there are major philosophical differences that may influence your choice.
As said above, there are a lot of philosophical differences between distros that lead to practical differences. There are a lot of very different ways the same software can be distributed.
- "Point Release" (OpenSUSE Leap) vs. "Rolling Release" (OpenSUSE Tumbleweed): Point release distros are like traditional software. They have numbered releases, and between each one no feature updates take place, only security updates and bug fixes. Rolling Release distros package and distribute software as soon as it's available upstream (the software developer's repos), meaning that there are no versions and no specific schedule.
- "Stable" (Debian Stable) vs. "Bleeding edge" (Arch): Stable distros are generally point release, and focus on fixing bugs and security flaws at the expense of new features. Each version goes through a lenghty period of feature freeze, testing and bug fixing before release. Stability here not only means trouble-free operation, but more importantly consistent behavior over time. Things won't evolve, but things won't break. At least until the next release. Bleeding edge distros, which often follow the rolling release model (there are outliers like Fedora which are mostly bleeding edge yet have point releases), on the other hand, are permanently evolving. By constantly pushing the latest version of each software package, new features, new bugs, bug fixes, security updates and sometimes breaking changes are released continuously. Note that this is not a binary, there is a very large continuum between the stablest and the most bleeding edge distro.
- "Community" (Fedora) vs. "Commercial" (RHEL): Despite the name, Community distros are not only maintained by volunteers, but can also be developed by some company's employees and can be sponsored by commercial entities. However, the main difference with Commercial distros is that they're not a product destined to be sold. Commercial distros like Red Hat's RHEL, SuSE Linux Enterprise or Ubuntu Pro are (supposed to be) fully maintained by their company's employees and target businesses with paid support, maintenance, fixes, deployment, training etc.
- "x package manager" vs. "y package manager", "x package format" vs. "y package format": It doesn't matter. Seriously. apt
, dnf
or pacman
, to name a few, all have the exact same purpose: install and update software on your system and manage dependencies.
- "general purpose" (Linux Mint) vs. "niche" (Kali Linux): General purpose distros are just that: distros that can do pretty much anything. Some are truly general purpose (like Debian), and have no bias towards any potential use, be it for a server, a desktop/laptop PC, some IOT or embedded devices, containers etc., some have various flavors depending on intended use (like Fedora Workstation for desktops and Fedora Server for, you guessed it, servers) but are still considered general purpose. They aim for maximum hardware compatibility and broad use cases. At the opposite end, niche distros are created for very specific and unique use cases, like pentesting (Kali), gaming (Nobara), music production (AV Linux) etc. They tend to have a lot of specific tools preinstalled, nonstandard defaults or modified kernels that may or may not work properly outside of their inteded use case.
- "team" (Any major distro) vs. "single maintainer" (Nobara): Pretty self explanatory. Some distros are maintained by a single person or a very small group of people. These distros do not usually last very long.
- "traditional" (Fedora Workstation) vs. "atomic" (Fedora Silverblue): In traditional distros, everything comes from a package. Every single component is individually installable, upgradeable, and deletable. Updating a package means deleting its previous version and replacing it with a new one. A power failure during an update lead to a partial upgrade and can make a system unbootable. Maybe a new package was bad and breaks something. Almost nothing prevents an unsuspecting user from destroying a core component. To mitigate risks and ensure a coherent system at each boot, atomic (also called transactional or immutable) distros, pioneered by Fedora Silverblue and Valve's SteamOS, were born. Like mobile phone OSes, the base system is a single image, that gets installed, alongside the current running version and without modifying it, and becomes active at the next reboot. As updates are isolated from one another, if the new version doesn't work the user can easily revert to a previous, functional version. Users are expected to install Flatpaks or use Distrobox, as installing (layering) packages is not as straightforward as with standard distros.
- "OG" (Debian) vs. "derivative" (Ubuntu): Original distros are directly downstream of their components' source code repositories, and do most of the heavy lifting. Because of the tremendous amount of work it represents, only a few distros like Debian, Arch, Slackware or Fedora have the history, massive community and sometimes corporate financial backing to do this. Other distros reuse most packages from those original distros and add, replace or modify some of them for differenciation. For example, Debian is the parent of almost all deb-based distros like Ubuntu, which itself is the parent of distros like Mint or Pop!_OS.
All distros provide, install and maintain, among other things, the following components:
- Boot and core system components (these are generally out-of-scope for beginners, unless you need to fix something, but you should at least know they exist):
- A boot manager (GRUB, systemd-boot, etc.): Boots the computer after the motherboard POSTs, lets you choose what to start
- An init system (systemd, etc.): Started by the kernel, it brings up and supervises everything else needed to run the computer
- A kernel (Linux): Has control over everything, main interface for software to discuss with hardware
- Command-line environment, to interact with the computer in text mode:
- A shell (bash, zsh, fish etc.): The main interface for command-line stuff
- Command-line tools (GNU, etc.): Standard suite of command-line tools + default tools chosen by the distro maintainers
- User-installable command-line tools and shells
- Graphical stack for desktop/laptop computers:
- Display servers (X11, Wayland compositors): Handle drawing stuff on screens
- A Desktop environment (Plasma, Gnome, XFCE etc.): The main graphical interface you'll interact with everyday.
- User-facing applications (browsers, text processors, drawing software etc.): Some are generally installed by default and/or are part of a desktop environment's suite of software, most are user-installable.
- A package manager (apt, dnf, pacman, yast etc.): Installs, deletes, updates and manages dependencies of all software installed on the machine.
As a new user, this is basically the only thing you should concern yourself with: choosing a first desktop environment. After all, it will be your main interface for the weeks/years to come. It's almost as important as choosing your first distro. These are a few common choices that cater to different tastes:
- Gnome: Full featured yet very minimalist, Gnome is a great DE that eschews the traditional Desktop metaphor. Like MacOS, out of the box, it provides its strongly opinionated developers' vision of a user experience. Fortunately, unlike MacOS, there are thousands of extensions to tweak and extend the looks and behaviour of the DE. Dash-to-dock or Dash-to-panel are great if you want a more MacOS-like or Windows-like experience, Blur My Shell is great if you love blurry transparent things, Appindicator is a must, and everything else is up to you. Gnome's development cycle is highly regular and all core components and apps follow the same release schedule, which explains why a lot of distros choose it as their default DE.
- KDE Plasma: Full featured and maximalist, Plasma does not cater to a single design philosophy, is very flexible and can be tweaked almost ad infinitum. This may be an advantage for people who like to spend hours making the perfect environment, or a disadvantage, as the possibilities can be overwhelming and the added complexity may come at the cost of stability and polish. There is not yet a single development cycle for core components and apps, which makes things a bit more difficult for distro maintainers and explains why there are so few distros with Plasma as the flagship DE. The KDE team is, however, evolving towards a more regular update cycle.
- Cinnamon: Forked from Gnome 3 by the Linux Mint team, who disliked the extreme change of user experience it introduced, Cinnamon provides a very traditional, "Windows-like", desktop-metaphor experience in a more modern software stack than the older DEs it takes inspiration from. Cinnamon still has a lot in common with Gnome by being simple and easy to use, yet heavily modifiable with themes, applets and extensions.
- Lightweight DEs for old or underpowered machines: The likes of XFCE, LXDE and LXQt are great if you want to resurrect an old machine, but lack the bells and whistles of the aforementioned DEs. If your machine is super old, extremely underpowered and has less than a few GB of RAM, don't expect miracles though. A single browser tab can easily dwarf the RAM usage and processing power of your entire system.
As for which one you should choose, this is entirely up to you, and depends on your preferences. FYI, you are not married to your distro's default desktop environment. It's just what comes preinstalled. You can install alternative DEs on any distro, no need to reinstall and/or distro-hop.
Forget what you're used to doing on Windows or macOS: searching for your software in a search engine, finding a big "Download" button on a random website and running an installer with administrator privileges. Your package manager not only keeps your system up to date, but also lets you install any software that's available in your distro's repositories. You don't even need to know the command line; Gnome's Software or Plasma's Discover are nice graphical "App Stores" that let you find and install new software.
Flatpaks are a great, more recent alternative to distro packages that's gaining a lot of traction and is increasingly integrated by default into the aforementioned App Stores. Flatpak is basically a "universal" package-management system that sits next to your system's, and lets software developers directly distribute their own apps instead of offloading the packaging and distribution to distro maintainers.
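Under the hood, the graphical stores run commands like these; the package ids are just examples, and on dnf/pacman systems the distro-package half has direct equivalents:

```shell
# What an "App Store" does behind the scenes (example package ids).
install_from_distro() {
    sudo apt update && sudo apt install vlc
}
install_from_flathub() {
    flatpak install -y flathub org.videolan.VLC
    flatpak update -y
}
```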
As discussed before, there is a metric fuckload (or 1.112 imperial fucktons) of distros out there. I advise you to keep it as mainstream as possible for your first steps. A distro with a large user base, backed by a decently large community of maintainers and contributors and aimed at being as fuss-free as possible is always better than a one-person effort tailored to a specific use-case. Choose a distro that implements well the DE of your choice.
The following are great distros for beginners as well as more advanced users who just want to have a system that needs almost no configuration out of the box, just works and stays out of the way. Always read the installation documentation thoroughly before attempting anything, and follow any post-install requirements (for example, installing restricted-licence drivers on Fedora).
- Fedora Workstation: Clean, sensible, modern, very up to date, and should work out of the box for most hardware. Despite being sponsored by Red Hat (who are getting a lot of justified hate for moving RHEL away from open-source), this is a great community distro for both beginners and very advanced users (including Linus Torvalds himself). Fedora is the flagship distro for the Gnome Desktop Environment, but also has a fantastic Plasma version. Keywords: Point Release, close to Bleeding Edge, Community, dnf/rpm, large maintainer team, traditional, original.
- Linux Mint: Mint is an Ubuntu (or Debian for the LMDE variant) derivative for beginners and advanced users alike that keeps Ubuntu's hardware support and ease of use while reverting its shenanigans, and is Cinnamon's flagship distro. Its main goal is to be a "just works" distro. Keywords: Point Release, halfway between Stable and Bleeding Edge, Community, apt/deb, smallish maintainer team but lots of contributors, traditional, derivative (Ubuntu or Debian).
- Pop!_OS: Backed by hardware Linux vendor System76, this is another Ubuntu derivative that removes Snaps in favor of Flatpaks. Its heavily modified Gnome DE looks and feels nice. In a few months/years, it will be the flagship distro for the (promising but still in development) Cosmic DE. Keywords: Point Release, halfway between Stable and Bleeding Edge, commercially-backed Community, apt/deb, employee maintainer team, traditional, derivative (Ubuntu).
- If you want something (advertised as) zero-maintenance, why not go the Atomic way? They are still very new and there isn't a lot of support yet, because they do things very differently than regular distros, but if they work OOTB on your system, they should work reliably forever. Sensible choices are uBlue's Aurora (Plasma), Bluefin (Gnome) or Bazzite (gaming-ready), which are basically identical to Fedora's atomic variants but include (among other things) restricted-licence codecs and QOL improvements by default, or OpenSUSE's Aeon (Gnome). Keywords: Point Release, Bleeding Edge, Community, rpm-ostree, large maintainer team, Atomic, sub-project (Fedora/OpenSUSE).
These are amongst the very best but should not be installed as your first distro, unless you like extremely steep learning curves and being overwhelmed.
- Debian Stable: as one of the oldest, still maintained distros and the granddaddy of probably half of the distros out there, Debian is built like a tank. A very stringent policy of focusing on bug and security fixes over new features makes Debian extremely stable and predictable, but it can also feel quite outdated. Still a rock-solid experience, with a lot to tinker with despite very sensible defaults. It is an incredible learning tool and is as "Standard Linux" as can be. Debian almost made the cut to "beginner" distros because of its incredible reliability and massive amount of documentation available, but it might be a bit too involved for an absolute beginner to configure to perfection. Keywords: Point Release, Stable as fuck, Community, apt/deb, large maintainer team, traditional, original.
- Arch: The opposite of Debian in philosophy: packages often land in Arch almost as soon as the source code is released. Expect a lot of manual installation and configuration, daily updates, and regularly fixing stuff. An incredible learning tool too, one that will make you intimate with the inner workings of Linux. The "Arch btw" meme of having to perform every single install step by hand has taken a hit since Arch gained a basic but functional installer a few years ago, which is honestly a good thing. I work in software. A software engineer who does every single tedious task manually instead of automating it is a shit software engineer. A software engineer who prides themself on doing every single tedious task manually should seriously reconsider their career choices. Arch's other main appeal is the Arch User Repository, or AUR, a massive collection of user-created, automated install scripts for pretty much anything. Keywords: Rolling Release, Bleeding Edge, Community, pacman/pkg, large maintainer team, traditional, original.
apt, baking ads and nags into major software, or only delivering critical security patches to Pro customers. Fortunately, there are some great derivatives like Mint or Pop!_OS cited above that work equally well but revert some of the most controversial decisions made by Canonical.
You've done your research, you're almost ready to take the plunge, you even read a lot of stuff on this very community or on the other website that starts with an "R", but people seem very passionate for or against stuff. What should you do?
Yes, eventually. To be honest, nowadays a lot of things can be configured on the fly graphically, through your DE's settings. But sometimes, it's much more efficient to work on the command line, and sometimes it's the only way to fix something. It's not that difficult, and you can be reasonably productive by understanding just about a dozen very simple commands.
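To give a sense of scale for that "dozen very simple commands," here's a rough sketch of them exercised in a throwaway directory (file names are hypothetical):

```shell
# The handful of commands that cover most day-to-day terminal work,
# demonstrated in a throwaway directory.
mkdir -p /tmp/cli-basics      # make a directory
cd /tmp/cli-basics            # move into it
echo "hello" > notes.txt      # create a file
cp notes.txt backup.txt       # copy it
mv backup.txt notes.bak       # rename/move it
ls -l                         # list what's here
cat notes.txt                 # print a file's contents
grep hello notes.txt          # search inside files
rm notes.bak                  # delete a file
pwd                           # where am I?
```

Add `sudo` plus your distro's package manager (`apt`, `dnf`, `pacman`...) and you've covered most of what a beginner will ever type.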
Noooo! Unlike Windows and macOS, which only work correctly on period-correct computers, Linux runs perfectly well on any hardware from the last 20 to 30 years. You will not gain performance by using an old distro, but you will gain hundreds of critical security flaws that have since been corrected. If you need to squeeze performance out of an old computer, use a lightweight graphical environment or repurpose it as a headless home server. If possible, one of the best ways to breathe new life into an old machine is to add some RAM, as even lightweight modern software will struggle with less than a few GB.
No. In short, systemd is fine, and all major distros switched to it years ago. Even the extremely cautious people behind Debian have used systemd as the default since 2015. Not wanting to use systemd is a niche rooted more in philosophical and ideological reasons than practical or technical ones, and it leads to much deeper issues than you should concern yourself with as a beginner.
Yes and No, but mostly No. First off, most distros install both Wayland and XOrg by default, so if one is not satisfying to you, try the other. Remember in the preamble when I said nVidia was a bad actor? Well, most people's complaints about Wayland are because of nVidia and their shitty drivers, so GTX/RTX users should stay on XOrg for now. But like it or not, XOrg is dead and unmaintained, and Wayland is the present and future. XOrg did too many things, carried too many features from the 80's and 90's, and its codebase is a barely maintainable mess. X11 was born in a time when mainframes did most of the heavy lifting and windows were forwarded over a local network to dumb clients. X11 predates the Internet and has basically no security model. Wayland solves that by being a much simpler display protocol with a much smaller feature set adapted to modern computing and security. The only downside is that some very specific functionality, built on decades of X11 hacks and its absolute lack of security, can be lost.
No. General purpose distros are perfectly fine for gaming. You can install Steam, Lutris, Heroic, Itch etc. and use Proton just fine on almost anything. Even Debian. In short, yes, you can game on Linux, there are great tutorials on the internet.
Not really. Flatpaks are great, and more and more developers package their apps directly in Flatpak format. As a rule of thumb for user-facing applications, if your app store gives you the choice between the Flatpak and your native package manager's version, choose the most recent stable version and/or the one packaged by the developers themselves (which will often be the Flatpak anyway). Snaps, however, are kinda bad. They are a Canonical/Ubuntu thing, so as long as you avoid Ubuntu, its spins, and its derivatives that still include Snaps, you should be fine. Snaps tend to take a lot longer to start up than regular apps or Flatpaks, and the Snap store is proprietary, centralized, and controlled by Canonical in every part. Canonical is also very aggressive in pushing Snaps on its users, even forcing them on people who explicitly try to install an apt package. If you don't care, have fun.
No. Generally, most software is installable from your distro's package manager and/or Flatpak. But sometimes your distro doesn't package the program you need, or an inconsiderate developer only distributes a random .deb on their GitHub releases page. Enter Distrobox. It is a very simple, easy-to-use command line tool that automates the creation of containers running other Linux distros using Docker or Podman (basically, tiny, semi-independent Linuxes that live inside your regular Linux), and lets you "export" programs installed inside these containers to your main system, so you can run them as easily, and with almost the same performance, as native programs. Some atomic distros like uBlue's variants even include it by default. That .deb we talked about before? Spin up a Debian container and dpkg-install the shit out of it. Absolutely need the AUR? Spin up an Arch container and go to town.
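The .deb workflow above can be sketched in a few commands. This is a hedged example: `some-app.deb` and the exported app name are hypothetical placeholders, and it's guarded so it only attempts anything where Distrobox is actually installed:

```shell
# Sketch of the Distrobox workflow described above. Requires distrobox plus
# Docker or Podman; `some-app` is a hypothetical application.
if command -v distrobox >/dev/null 2>&1; then
  distrobox create --name deb12 --image debian:12          # a tiny Debian inside your distro
  distrobox enter deb12 -- sudo apt-get install -y ./some-app.deb
  distrobox enter deb12 -- distrobox-export --app some-app # expose it to the host app menu
  dbx_status="ran"
else
  echo "distrobox not installed; commands shown for illustration only"
  dbx_status="skipped"
fi
```

After the export, the app shows up in your host launcher like any native program, even though it lives inside the container.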
Thanks to everyone who helped improve this guide: @GravitySpoiled@lemmy.ml, @tkn@startrek.website, @throwaway2@lemmy.today, @cerement@slrpnk.net, @kzhe@lemm.ee, @freijon@feddit.ch, @aarroyoc@lemuria.es, @SexualPolytope@lemmy.sdf.org, @Plopp@lemmy.world, @bsergay@discuss.online ...and many others who chimed in in the comments ❤
Link to version 1: https://lemm.ee/post/15895051
I'm well aware that both elementaryOS and its Pantheon DE were innovative and made major strides for user-friendliness a couple of years back. Hence, they rightfully earned a spot among the newbie-friendly distros. However (I might be wrong), it feels as if they haven't been able to keep up the momentum, and have therefore lost their significance.
If you think I'm wrong, please feel free to correct me; I would love to be educated on how elementaryOS has kept relevance (if they actually have).
Very good write-up overall. I'll start by admitting that I didn't read all of it. But from the parts that I did read, I have some small comments. I think that Debian Stable is a great beginner distro, since it's essentially unbreakable. With something like KDE Plasma as a DE, it's perfect for noobs. EndeavourOS is another great one. Maybe not for beginners, but for semi-advanced use cases.
Also, I'm not sure about suggesting the Atomic family of distros to newcomers. It might be my relative unfamiliarity with them, but I don't think immutable distros are a good place to start. They're definitely great to try when you're familiar with Linux, but they're still kind of fringe. It's hard to get support using those.
but I don’t think immutable distros are a good place to start.
FWIW, the first distro I used and subsequently daily-drove^[1]^ was Fedora Silverblue, over two years ago. The try-hard in me immediately set about applying (or at least trying to apply) the hardening outlined in Madaidan's article. After banging my head for a week, I started actually using the system, and it has been a very smooth ride ever since. The uBlue images are straight up better when it comes to the OOTB experience, without even mentioning the associated 'managed'^[2]^ aspect that comes with them. Therefore, I believe they're perfectly suitable. They're not for everyone, but no distro is anyway.
I think immutable distros could be great for newbies, but I'm just thinking they're still so new that if you go online to look for Linux advice or help, most things you'll find are very much not for immutables and I doubt a true newbie understands what's what.
That's also a reason I'd recommend something like Debian (although I've actually never even used it myself): there's so much compatible info out there. I would recommend openSUSE, even Tumbleweed, but there's just not as much help out there to find as there is for Debian. But even with that said, openSUSE's snapshots and the way they're configured out of the box are an absolute godsend and game changer for newcomers.
I think immutable distros could be great for newbies, but I’m just thinking they’re still so new that if you go online to look for Linux advice or help, most things you’ll find are very much not for immutables and I doubt a true newbie understands what’s what.
I definitely agree. But, I think it's sufficient to communicate to new uBlue users that they should check uBlue's own documentation first. And, if they didn't find the answer there, that they should ask on discourse or on Discord.
I only addressed this for new uBlue users as I don't think other immutable distros are sufficiently newbie-friendly yet.
I'm doing an experiment right now. I'm giving my previous laptop to my dad to replace his very old, very close to death MacBook Air. I've installed Bluefin, rebased to the stable branch, and kept everything else stock.
We'll see how it goes 😁
Will report 😁
The only thing that scares me a bit is that not only is he a newbie, he also actively refuses to understand how computers work ^^;
Totally agree with that. I'm just wondering how many people read things like welcome screens etc where such info usually is presented.
They should have all necessary software installed and configured for people to easily get to things like those you mentioned. And have a clear help section in the OS, preferably with sections for different large topics and whatnot, that links to forum sections or similar. Steer them right before they even hit the web, sort of.
Wonderfully laid out. Couldn't agree more.
I'm also curious to find out how effective welcome screens are.
I suppose the most effective would be if the user is told how to act whenever they're about to commit a 'mistake', after which they're gently reminded what they should do instead 😅. But I believe it would be a gargantuan effort to effectively gamify the distro like that 😂. Cool idea though; hopefully some iteration is already in the works.
OpenSUSEs snapshots and the way they're configured out of the box is an absolute godsend and gangbanger for newcomers.
I have a general idea of how to set this sort of thing up in almost any distro now, but this is absolutely one of the ideas that swept me off my feet with OpenSUSE. I like a lot of distros but keep coming back to that wild little tumbleweed chameleon/geeko. 😁
It's huge. I've been using ~10 distros sporadically over the past 25 years, and I never ever felt like I could depend on my systems running Linux, because one simple mistake by me, or an update, could render the computer unusable, and I didn't (and still don't) know how to fix it. Eventually something like that would always happen and make me revert to Windows full time. Tumbleweed is the first distro ever where I feel like I'm standing on solid ground instead of on a house of cards that I can't put back together, because of the snapshots. It gives me confidence, and I feel like I can finally use Linux while slowly learning it at my own pace. Absolutely love it.
Also, I see that I have a typo to fix in my previous comment lol.
First of all, thank you for this! This effort is very much appreciated and will definitely make it easier to parse through Linux; especially for beginners.
Having said that, some personal nitpicks of mine:
- I absolutely love Fedora. But if it's named first on your list of beginner distros (presumably due to alphabetical ordering), then it better be easy as hell and work as expected OOTB. Unfortunately, that ain't the case. Hence, at least mentioning the Howto page of RPM Fusion would have been sensible to combat issues users might experience otherwise.
- I'm fine with the inclusion of openSUSE Aeon, but openSUSE Kalpa is literally in Alpha. Therefore, it's too early to be recommended.
- I'm personally not very bothered by Fedora Workstation being on the list of distros geared towards beginners while Debian is found on the list of power-user distros that beginners should avoid. ~~(I'm a die-hard Fedora fanboy anyways.)~~ However, I am curious about your reasoning/justification.
- Alpine Linux was originally envisioned as an embedded-first distribution. Therefore, most of its design choices revolve around that: small, secure, simple, et cetera. The way you describe/depict Alpine Linux is more in line with how I would describe (what I'd refer to as) demonstrative distros like Artix and Devuan.
(Link: GitHub — devangshekhawat/Fedora-40-Post-Install-Guide: "Things to do after installing Fedora 40")
I pondered a lot about including a bit on RPM Fusion in Fedora's paragraph, but I elected not to because there is already too much stuff here 😁
As a 20-year Debian user who switched to Fedora a couple of years ago on my main laptop, I can say confidently that Debian is the distro I'm most comfortable with. I love Debian. But there are a couple of things that prevent me from recommending it as a very first distro:
- The base system is very barebones, and you're required to manually install vital things like proprietary drivers (I think it's a bit less painful now with the nonfree installer, but I haven't installed a fresh Debian in a few years). For me, having a fully functional Debian laptop is not hard work, but it requires a bit of knowledge beforehand.
- A lot of people want the latest and shiniest, and with Debian they might be tempted to switch to Testing or Sid, which is a very bad idea for a daily driver.
Good call about Kalpa, I'm removing it
Thank you for the clarifications!
Regarding what you mentioned on Debian; ultimately, you're a lot more experienced than I am with it. But, IIUC, Debian 12 should have done a great job at easing (new) users into its ecosystem. Not sure if it's sufficient though.
You're welcome!
Yeah, I think the recent nonfree images should take care of the most pressing driver issues (last time I installed Debian, I had to separately download the drivers for my WiFi card and put them on a second USB stick just to be able to proceed with the installer). I don't know if you still need to manually install proprietary blobs for the CPU or the GPU post-install, tho. If not, that would mean modern Debian is indeed very close to OOTB functionality.
How is a noob supposed to read and understand any of that?
Even my friends to whom I have explained the general philosophy and utility of Linux (some of whom are intrigued and somewhat open to switching) would look at me as if I were insane if I sent them this "guide".
Like FOSS philosophy:
Both can exist, and both are great, and that's okay. 😀
Sorry, I'm not a native English speaker and I work in IT 😁
I however believe that it's more useful in the long run to use correct terminology (with a small explanation if necessary) rather than "dumbing it down", as it makes finding pertinent information quicker/easier.
Wow, basically everything you wrote about Manjaro was wrong:
FWIW I ran my gaming rig on Manjaro for a couple of years.
It doesn't need constant maintenance, and it doesn't break. The whole point of it is to be a stable variation of Arch.
It does need regular maintenance, as highlighted in every single stable update announcement. It doesn't break if you follow these maintenance steps when relevant to your install. It is absolutely not stable (as in Debian Stable or RHEL or SLES stable) as things are moving quickly. It might be "stable" as in "crash-free", but it is not "stable" stable. And as I said, after running it for 2 years, I'm not convinced it's that crash-free either. I remember an era (I think 5.9-ish kernel series) that crashed all the time.
It doesn't have a highly irregular update schedule, it's quite regular — every two weeks
Okay, almost-semi-regular then.
AUR doesn't "expect" anything, it's a dumping ground where anybody can put anything.
True, the AUR is not sentient. AUR creators, on the other hand, are overwhelmingly Arch users who build their scripts targeting an up-to-date Arch system.
It does need regular maintenance, as highlighted in every single stable update announcement.
If you're talking about "Known issues and workarounds" those aren't caused by Manjaro, they're issues that crop up with various packages. The forum attempts to crowdsource fixes as part of Manjaro's mission to make it easier on its users.
It's a great resource and it can be used by people on any Arch-related distro (and potentially other distros as well). I wish more distros would do this.
It is absolutely not stable (as in Debian Stable or RHEL or SLES stable) as things are moving quickly.
Well it's still a rolling distro with Arch heritage. It's as stable as you can make Arch. Which is quite stable in the sense that a Manjaro install won't stop working out of the blue (I can attest to that personally, going on the 5th year as a daily driver). And they've gone and added Timeshift snapshots as default so if you mess something up you can simply restore a snapshot, which takes care of user-related tinkering as well.
Okay, almost-semi-regular then.
Not sure I understand your point about the updates (or the "almost-semi" thing). What does it matter if updates come after 13 or 17 days? Is it important to you to be exactly 14 or what?
AUR creators, on the other hand, are overwhelmingly Arch users who builds their scripts targeting an up-to-date Arch system.
10% of AUR packages are abandoned. Another 20% have never been updated after the initial release. Only 35% have been updated within the last year.
Anyway, it doesn't matter. AUR "packages" are recipes that either compile packages from source or download binary releases. Both methods are very resilient and don't care about delays of a couple of weeks.
While you can in theory run into an AUR package that was just updated to require something that was just added to Arch the chances are extremely small. It's hardly a common problem.
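For readers who haven't met the AUR, a "package" there really is just a build recipe: a PKGBUILD file. A hypothetical minimal one (placeholder name and URL, not a real package) looks roughly like this:

```shell
# Hypothetical minimal PKGBUILD: the entire AUR "package" is this recipe.
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Example package built from source"
arch=('x86_64')
url="https://example.com"
license=('MIT')
source=("$url/hello-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "hello-$pkgver"
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` in the recipe's directory fetches the source, builds it, and installs the result through pacman, which is why a recipe that fetches its own source or binary release is fairly tolerant of the base system lagging a couple of weeks behind.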
Distro best added to the "Power-user distros to avoid" list: Gentoo (saying that as a Gentoo user).
I disagree with your claim that doing things like installation steps manually is necessarily a bad idea, though. It depends on your goal. Obviously it isn't the fastest way to get things up and running, and as such it isn't appropriate for newcomers (or for mass corporate deployments). If your goal is to learn about the lower levels of the system, or to produce something highly customized, then it becomes appropriate. Occasionally, it pays dividends in the form of being able to quickly fix a system that's been broken by automation that didn't quite work as expected. Anyway, I'd suggest rewording that bit of your Arch screed.
(Link: PrivSec — "A practical approach to Privacy and Security", by Tommy and wj25czxj47bu6q: "Linux is not a secure desktop operating system. However, there are steps you can take to harden it, reduce its attack surface, and improve its privacy.")
This is an exceptional write up, thanks!
I started with Mint and it was very simple to set up. I don’t really like the DE though (personal preference, I’ve used OSX for over 10 years). From your description it sounds like I can change Cinnamon to something else - is this fairly straightforward to do?
I’m looking to use the machine as a photo processing platform (from film and digital) and finding alternatives to Adobe products like Lightroom and Photoshop… with a view to ultimately having a NAS and cloud backup once I get to it.
(Link: www.linuxmint.com — "Linux Mint is an elegant, easy to use, up to date and comfortable desktop operating system.")
From your description it sounds like I can change Cinnamon to something else
You definitely can.
is this fairly straightforward to do?
It ain't bad. However, I would opt for a distro that defaults to the preferred DE. In this case, similarly to Linux Mint, the distro would have to be beginner-friendly, popular, polished and stable^[2]^. So, IMO, that would be:
- GNOME^[3]^; Pop!_OS or Zorin OS
- KDE Plasma; Tuxedo OS
- Xfce; MX Linux
Note that there are many other DEs. However, the above mentioned DEs (together with Cinnamon) are the most polished and popular. And while there are many other distros through which you might 'consume' said DEs, the distros mentioned above are the ones I (personally) like to recommend.
This is REAL Linux, done by REAL Linuxians.
"Hello I would like sudo pacman -Syyu
apples please"
They have played us for absolute fools.
This is false.
In practice, you can't really choose the release cycle on a per-package basis:
https://wiki.debian.org/DontBreakDebian
Pacman also supports optional dependencies.
I don't agree at all that Wine is for advanced users. If you install modern Wine, you can use most Windows software out of the box, like it's Windows. I kept a Windows partition for quite a long time, but nowadays Wine works well enough that I don't even need one: Proton works well for gaming, and Wine handles the one piece of Windows-only proprietary software I need to read some old files saved in a proprietary format. (That's also the only non-game I use Wine for; for the vast majority of Windows-only software, there's a FOSS Linux alternative that works just fine, and it's worth looking around for those alternatives when you make the switch.)
I also disagree that not using standards (such as systemd) is reserved for "very advanced users". It depends on what exactly the standard you are moving away from is, but so long as you understand what it is you're replacing, what you're replacing it with, and how to use the replacement, you will be fine. Documentation is one big reason to avoid deviating from standards, but you may decide documentation is not as important as whatever your reason for wanting to use a different init system, or a different C library, or whatever. Tbh, personally, I use runit right now and find it a lot easier to use than systemd. It's very simple—services are just executables and symlinks. I'd have to check documentation and look at examples to make a systemd service, but to make a runit service I just have to create a directory with an executable in it, and to enable it I just make a symlink. The benefit of systemd is how widely used it is so you're more likely to find someone with the same problem as you, not because it's inherently easier to use.
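The "a service is just a directory with an executable, enabled by a symlink" claim above can be sketched concretely. This example uses a scratch directory instead of the real `/etc/sv` and `/var/service` paths (a Void-style layout) so it runs anywhere; `myapp` is a hypothetical daemon:

```shell
# Sketch: the anatomy of a runit service. Real systems typically use
# /etc/sv (definitions) and /var/service (enabled); we use a scratch dir.
SVDIR=$(mktemp -d)
mkdir -p "$SVDIR/sv/myapp"

# The whole service definition is one executable `run` script:
cat > "$SVDIR/sv/myapp/run" <<'EOF'
#!/bin/sh
exec /usr/local/bin/myapp 2>&1
EOF
chmod +x "$SVDIR/sv/myapp/run"

# "Enabling" the service is just a symlink into the supervised directory:
mkdir -p "$SVDIR/service"
ln -s "$SVDIR/sv/myapp" "$SVDIR/service/myapp"
ls -l "$SVDIR/service"
```

Compare that with writing a systemd unit file with `[Unit]`/`[Service]`/`[Install]` sections and running `systemctl enable`: neither is hard, but the runit version is plain shell and filesystem operations all the way down.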
If you
understand what it is you're replacing, what you're replacing it with, and how to use the replacement
then you, almost by definition, are an advanced user.
A beginner should avoid these things, once you are far enough along to understand why you might want to replace one of these things, and form your own opinion on it, then go right ahead. But you're no longer a beginner at that point.
then you, almost by definition, are an advanced user.
I don't agree, but how people define "advanced"/"beginner" is mostly arbitrary I guess. I wouldn't say that qualifies as advanced, maybe intermediate at best, but I think it's entirely possible to understand eg the basic differences between init systems without ever touching Linux before in your life.
Some distros are maintained by a single person or a very small group of people. These distros do not usually last very long.
Except the oldest distro that still exists is maintained by one person.
Anything that refuses to use standards for ideological reasons
See: "Best Practice"
Thank you. Fantastic write-up. Saved for future use. 😀
I generally agree with these assessments. One point I would like to add some nuance to, though. This might not be the most popular take, but saying that Ubuntu should be avoided at all cost is a bit extreme. IMO.
If I may, here some counter-arguments to the criticisms of Ubuntu:
It is easy to use and accessible. It has a user-friendly interface and is installed with ease, making it an excellent choice for beginners. The large user base and extensive documentation also provide a wealth of resources for troubleshooting and learning.
Snap packages are convenient as they bundle all dependencies. Flatpaks do something similar, of course. But just because Canonical controls Snap and it is closed source doesn't automatically make it evil.
The fact that Canonical has successfully commercialised Linux doesn't always sit well with some people in the spirit of FOSS Linux, but they have also done a great deal to widen the distribution and appeal of Linux. Ubuntu has a large and active community that can be incredibly helpful to new users. The community support, forums and official documentation are most useful. I don't currently use Ubuntu, but I use their resources frequently. Their work also makes the work of distros like Mint, elementaryOS and Pop!_OS easier.
Ultimately, the choice of Linux distribution depends on individual needs and preferences - even for beginners. Although I am not a Ubuntu fan, I wanted to provide a counterpoint with this post. Ubuntu certainly has its flaws, but are we really doing the world of Linux a favour by promoting complete avoidance and thus damaging Ubuntu?
Anyway, just my opinion. I know some of you will disagree with me, perhaps passionately and strongly. Some will agree. That is fine. My hope is that the Linux world remains as diverse as possible, with plenty of options for everyone, and enough resources for fast, high quality development.
I think Ubuntu was relevant 15 years ago, when Linux was scary. Nowadays, it's neither easier to install nor to use than, say, Fedora for example. I'd even say any current distro with a live CD and a graphical installer is easier to install than Ubuntu 15 years ago.
The fact that Canonical has successfully commercialised Linux doesn't always sit well with some people in the spirit of FOSS Linux, but they have also done a great deal to widen the distribution and appeal of Linux.
I agree with the second part but not the first. Linux would be nowhere near what it is today without some serious corporate investments, so commercial Linux is a good thing (or a necessary evil depending on your POV). The largest kernel contributors are large IT and hardware companies, after all.
What's bad about Ubuntu is that the "free" version is an inferior product, like a shareware of old. The biggest commercial competitors like SLES or RHEL are downstream from excellent community distros (OpenSuse and Fedora, respectively).
The community support, forums and official documentation are most useful. I don't currently use Ubuntu, but use their resources frequently.
Fortunately that knowledge can be used downstream and often upstream too. After all, most Ubuntu issues are Debian Sid issues.
Recommending Fedora and especially its atomic spins without much documentation to a new user?
To be clear; while OP does mention "Fedora Silverblue" to introduce and contrast atomic distros to traditional ones, they only explicitly recommend uBlue images.
And while it's by no means as exhaustive as the ArchWiki or Gentoo Wiki, uBlue's documentation isn't a slouch either; I've seen far worse. If possible, could you name what's crucially missing?
(Link: universal-blue.discourse.group — "Planting the flag for the next generation Linux desktop")
If possible, could you name what's crucially missing?
User-friendly articles and answers on forums to absolutely all more or less common issues like what Mint, Arch, Ubuntu and other extremely popular distros/bases have. It's very important for a new user imo. We shouldn't overwhelm them with choices and technical documentation. If you don't believe me, check some content creators. They all agree that we should just give them a popular distro like Mint or Ubuntu and let them progress as fast as they can.
Thank you for the reply!
Disclaimer: After a couple of revisions and rewrites, I concluded that directness and conciseness was required. If my tone seems confrontational at times, I would like you to know that that's not my intent. Therefore, in such cases, I would like to friendly request you to assume the best. Thank you.
User-friendly articles
How is uBlue's documentation not user-friendly? Be specific and give an example.
forums
Naive in a post-Discord world.
User-friendly ~~articles an~~d answers ~~on forums~~ to absolutely all more or less common issues
Based on what do you imply that uBlue's discourse and Discord has failed this? Again, be explicit and give an example.
It's very important for a new user imo. We shouldn't overwhelm them with choices and technical documentation.
Assumes new users to be sufficiently homogeneous in this regard. The silent majority is not accounted for.
choices
What choices?
If you don't believe me
I believe there's definitely some truth in your earlier made statements.
check some content creators. They all agree that we should just give them a popular distro like Mint or Ubuntu and let them progress as fast as they can.
Even if that's true, I think it's hilarious to appeal to their consensus 😂.
Even if that's true, I think it's hilarious to appeal to their consensus 😂.
Imo this shows your aggressive inability to accept opinions different to yours, even if they are obviously more true. At this point I'm asking you to stop stalking me and making fun of me, or I will be forced to report you and/or contact law enforcement.
Your reply is much appreciated! Even though I am saddened by the content. And apologies for the upcoming long reply. I thank you in advance for reading through it all.
Imo
Thank you for weakening it with "Imo"! To clarify; it seemed as if the "authority" in "appeal to authority" was conflated with content creators. If this wasn't an appeal to authority in the first place, then please feel free to dismiss my earlier stated sentence.
Normally, I would have asked for clarification in order to prevent possible miscommunication. Unfortunately, after our first serious attempt at reconciling our differences failed miserably, I have instead chosen a more direct approach in hopes of making things more accessible. It's also more prone to being misunderstood as confrontational, aggressive, et cetera. But if even my super sweet approach in the earlier mentioned conversation failed, I don't see why I should make it less accessible for all involved parties when it doesn't benefit either of us.
this shows your aggressive inability to accept opinions different to yours
I may as well accuse you of doing the same. But..., I don't. But somehow I'm perceived as the villain. I simply fail to understand.
On Lemmy, I engage for one reason, and for one reason only; to arrive at a mutual understanding. This manifests itself in multiple ways:
- I'm interested in the community's output on a certain query and engage with them through a post I create.
- I'm introduced to a new concept through a post/comment -> Search engines don't yield anything useful -> I ask a question in hopes of learning something new -> And hopefully that engagement yields new information for me; I'm primarily on the receiving end of 'profit'
- Someone poses something that I don't agree with or don't understand -> I engage in hopes of my understanding being proven wrong; as that results in the most new information; hence most profit -> Most often, it's somewhere in between; I might get a new perspective on something, but not too crazy. At times, though, the person I was engaging with had some notions that were not entirely backed up; hence, we both end up learning a thing or two
- Misinformation or fake news or misunderstanding or whatever known false fact is shared -> I engage in hopes of combating false notions. No profit; but you gotta do what you gotta do
- Question is asked, I happen to know an answer that might be helpful -> I contribute. No profit; but contributions are required to foster a nice community
To be clear; I love to accept valid criticism, especially when it provides me with new insights and polishes my own ideas/notions. Heck, I've even been complimented on how I engage with criticism in one of our first interactions. And, if you've noticed, this very conversation below our current post is not very different. I just ask you to back up your claims so that I may learn from them. I want to accept them; new knowledge/insights/profit et cetera. But I can't simply accept your claims on the basis of nothing. That doesn't make any sense. That's not how epistemology works.
even if they are obviously more true.
If they're "obviously more true", then it should have been obviously easy to prove their truth. But I've yet to receive a proof, even after explicitly asking you. Or, conversely, proof of my falsehood. That's basically the problem at hand: you're reluctant to back up your claims, even when pressed to do so. Instead, you choose to do whatever you did (or tried) in your most recent reply.
Or, I don't know, ask me how I'm so sure of my own convictions/judgements/ideas. But, and that's very curious; I don't recall you ever asking me a question. Isn't that the most obvious indication that I'm actively trying to engage with your ideas and your output? While you seem to be completely devoid of that. And, somehow, I've become the one that's regarded as possessing "aggressive inability to accept opinions different to yours, even if they are obviously more true.". Sorry, I simply can't take this seriously 😅.
At this point I'm asking you to stop stalking me and making fun of me
Fam, you got some hate-boner towards Fedora, 'immutable' distros and especially their intersection; Fedora Atomic. Either educate yourself on them and act accordingly, or simply stop spreading misinformation. Either way, you'll never hear from me again. Related point; simply don't spread misinformation. Period.
making fun of me
I fail to see how I am even making fun of you. If you perceive 'pressing to back up claims' as making fun of you, then... I simply don't know what to say.
Tbh I am not sure anymore if you're being serious in this discussion or just trolling because I explained some things very clearly but you still misunderstand them a lot. I'm not willing to continue this. I apologize if I'm not right but I have to stay away from trolls and other kinds of evil people.
There are two reasons switching to, or even trying out, Linux is difficult and often ends in failure: too many choices or too much information. This (great) write-up is an example of the latter. How many of us, the would-be tutors of Linux, actually read the whole thing before hopping down to the comments to offer our opinion? Be honest.
We are all passionate about FOSS. Not just because it's neato, but because we recognize that it improves the quality of life of anyone who uses it, and (hopefully) society at large.
Rather than providing many choices with a sink-or-swim mentality, or writing a novel Herman Melville would envy, my suggestion is to become mentors rather than tutors. What's the difference?
Haha I dunno I thought it was a pretty good primer for people seeking it out, and people in this community are super helpful and mentor-like in my experience.
I wouldn't even call myself a beginner anymore, but I read the whole thing. 😀
Oh yea folks on lemmy are super helpful. And some of them are mentors. To me there are two qualities of a good mentor: time and patience. They will take a student and work with them for however long it takes. They know the student won't get it immediately, so they wait. They recast the question. They provide personalized examples. They spend enough time with a single student for that student to mature as much as they can while the two are together. Think Mr. Miyagi from The Karate Kid.
Just as with the other two, there are drawbacks. Mentorship takes time. My standpoint is that if we spend that time, and I mentor two people, and you mentor two people, and they each mentor two people, we reach critical mass and start reaching the normies who want in, but don't have another way. I'm not as wise as Mr. Miyagi and I'm quite snarky with my opinions 🙃
"Hate" is a strong word. I don't hate Ubuntu. It's just irrelevant.
It's not alone anymore in the realm of "easy to install and use", and ongoing enshittification nagging you to upgrade to Pro™️ makes it an objectively worse product than its direct competitors.
Exactly. Both Manjaro and Ubuntu have had a certain history of "silly misguided shenanigans" that sorta damage trust. You just never know when the next stunt might be pulled.
I personally didn't have too many problems with Manjaro on my gaming laptop, but have since moved to EndeavourOS, which I'm enjoying very much. 😁
Same here 😀 Switched from a two-day Manjaro trial to EndeavourOS without the hassle of a native Arch install.
Manjaro looks like a really good distro from the outside, but I've heard a few strange things about that specific distro that I didn't like at all. They also messed with the boot-up logo and added their own bookmarks to new Firefox installs...
I had a really strange feeling about that unauthorized, intrusive installation. I'm no expert so I won't rant about Manjaro, but my motto is to always follow your gut!
Thanks to this guide I've stopped banging my head against the wall trying to install Arch on a laptop and just ended up putting Mint on it. Nearly everything works out of the box, and Cinnamon seems to be close enough to what a Windows user would expect, and then some, seeing how customizable it is.
I'll bang my head against the wall again once I've familiarized myself with it.
Thanks again OP!
Thank you for sharing your experiences!
May I ask you what made you pursue an Arch installation in the first place?
My opinion doesn't mean much since it's been forever since I tried any other distro but I'm surprised Debian isn't on the beginners list.
it might be a bit too involved for an absolute beginner to configure to perfection
I'm not really sure what this means? It might be more accurate to say it's not the best distro if you'd like to tinker with your desktop experience.
Notably, nothing on the beginners list ought to be run as a headless server, but Debian is perfect for that job. The reason I've become so enamoured with Debian over the years is that I can use it on my desktop and on servers and it's the same system: everything is exactly where I'm used to it being.
In theory, yes. Vanguard uses ring 0 access, and failures/crashes in code running at that level will lead to a BSOD.
In practice, Riot very likely tests Vanguard on various hardware before shipping updates to it, as it's used by all players who play LoL and Valorant, and a fuckup like that would damage the trust they've built with the players. Players are trusting them to run ring 0 code on their computers so they can have a cheatless experience, after all.
In practice, CrowdStrike very likely tests Falcon on various hardware before shipping updates to it, as it's used by a huge number of enterprises, and a fuckup like that would damage the trust they've built with those enterprises. Enterprises are trusting them to run ring 0 code on their computers so they can have a malware-less experience, after all.
I'm less worried about bugs causing boot loops with these kernel anti cheats and more worried about security holes.
I'm sure they test these things thoroughly though and take security extremely seriously.... right?
That unfortunately means you can't play a lot of games. And for most people it's practically unknowable what an installer is doing; they don't expect a game to nuke their computer.
There needs to be accountability and a certain level of trust. Microsoft shouldn't allow kernel drivers for crap like anti cheat.
That works until all* games come with root level anti cheat. It was the same with micro transactions which people still defend despite being utter shit.
Proton is not actually sandboxed the way an actual container is.
A) If the program running in Proton was given root access in some way, say by tricking people into entering their root password for a claimed update, it would have complete control of your entire system, just like any other program.
B) Apps running in Proton still have access to the regular file system.
Wine isn't an emulator or a vm.
Yes, and I've seen it happening. Usually it doesn't instantly brick every PC, but it can sometimes brick certain PCs with specific configurations. Then it will be silently patched without acknowledgement for the bug.
I've seen it mess with (and crash) graphics and network drivers, rendering PCs useless until forced reboot. It can also mess up other games, processes, and even updates.
People have been warning gamers about kernel level anticheats since they were introduced, because no userland code should run with that level of privileges, period. However, people still installed those games not really understanding the threat, and that's why we have so many games with a kernel anticheat.
Helldivers 2 fucked my PC up after one of their updates in May. Game literally became unplayable and corrupted my Steam database twice (causing me to have to reinstall Steam both times).
In PVP games, I can sort of understand the players' desire to have a cheat free experience, but in purely PvE coop games, it really feels so pointless and is such overkill. Regardless, there are better ways to accomplish anticheat that don't involve gaining kernel level access. The risk isn't worth it.
You don't give your house keys to your home security system provider. Giving kernel access to anything, even if it's for your own good, is dumb. People don't understand the risks that come with it. People just think what the companies tell them to think. As a matter of fact, there are still cheaters in valorant. Vanguard isn't perfect, it can still be bypassed. VAC works fine for what it is, and it could still be refined. It bans more people monthly than Vanguard.
The biggest reason for kernel level anticheats is your sweet sweet data and more control of your computer. You don't need them. We have been playing online games since the 90s, and none used kernel anticheats. It was never necessary to sell your computer to Tencent in order to play a game which, again, still has cheaters.
Best part? You don't need to have installed Genshin Impact. (Jonathan Bolding, PC Gamer)
It's also potentially a infiltration vector for malicious activity.
Genshin Impact's anti-cheat has been used to enable ransomware taking over Windows computers, and you don't even need to have Genshin installed.
It was a danger to all Windows users just by existing, because the ransomware simply shipped with the Genshin anti-cheat, which it would install on its own. Because it was a "verified" piece of software, Windows would just go "oh ok, seems cool, go right ahead" and the ransomware would gain complete control of the system through the anti-cheat.
Theoretically it should only be running during gameplay, and that's probably true as I'm sure security researchers would've pointed it out if games installed a persistently running rootkit. So it's different than Crowdstrike which was running immediately from boot.
So there is that, if it caused your PC to crash it should be fine after reboot. The driver has God power though as far as your PC goes so if it was the point of entry for a malicious attack you could be really screwed.
Vanguard is always running at all times.
Honestly no idea why it isn't considered malware.
It has comparable access, yes, ~~but assuming no malicious intentions, it's extremely unlikely that they achieve something as catastrophic.~~
~~If they fucked up in a similar fashion, that would cause your PC to bluescreen, too, but since League does not start up during boot, you could still use your PC, just not League.~~
Nope.
This is correct; in Windows, a driver is the most straightforward route to ring 0 access. It absolutely could at any time do exactly what CrowdStrike did. But so could Nvidia/AMD with GPU drivers, or your motherboard manufacturer with chipset and RGB drivers. It's not quite the smoking gun people make it out to be, as there are a lot of legitimate reasons to have this kind of system access.
The egregious part was that CrowdStrike users agreed to allow a vendor to bypass canary channels and deploy straight to their endpoints.
Does Vanguard not seek testing and validation by Microsoft before pushing updates?
I saw the recent video from Dave's Garage (the Task Manager developer) on YouTube; the lack of thorough official validation seemed to be an important part of the CrowdStrike problem.
Microsoft testing updates?
They have an extremely bad track record of that.
My information might be a bit outdated, but Microsoft themselves only test on virtual machines and let their Windows Insiders do the rest. Unfortunately that doesn't cover many production use cases.
So we sysadmins have to either test all Microsoft software/updates ourselves and/or fix Microsoft's mistakes after they're rolled out.
That has caused thousands of hours of downtime this year alone in my company, across all users combined.
Unfortunately management just believes whatever the sales/marketing teams tell them.
Huh, seems like you're right:
Riot Vanguard is an on-boot application. That means if you do choose to disable it and later decide you’d like to play VALORANT, you will have to restart your computer.
https://support-valorant.riotgames.com/hc/en-us/articles/360046160933-What-is-Vanguard
I guess, it's only user-space drivers which Windows can load at runtime then?
At least, I'm hoping that's a technical limitation of Windows. Otherwise, this is fucking stupid.
Well, it always is fucking stupid, but it would be even more so.
Preface: I'm not an expert in this yet but I'm pretty interested in learning about systems-level topics so if I'm wrong please correct me!
Yes. The thing about anticheats and antiviruses is that they are only useful when they have access to the underlying resources that a virus or cheat engine might try to modify. In other words, if cheating software is going to use kernel-level access to modify the game, then an anticheat also needs kernel-level access to find that software. It very quickly became an arms race down to the lowest level of your computer. It's the same with antiviruses.
IMO the better strategy would be to do verification on a server level, but that probably wouldn't be able to catch a lot of cheats like wall hacks or player outlines. At some point you just have to accept that some cheaters are going to get through and you'll have to rely on a user-reporting system to get cheaters because there will always be a way to get past the anticheats and installing a separate rootkit for each game isn't exactly a great idea.
One Minecraft server I played on installed a program for blocking x-ray hackers (a type of hack that lets you see valuable ores through walls so you know exactly where to mine).
The anti-xray mod worked by reporting to the client that the blocks behind a wall were a jumble of completely random blocks, preventing x-ray from revealing anything meaningful.
This mod resulted in massive lag, because every time you broke a block while mining, the server had to re-report what the newly exposed blocks behind it actually were. It basically made the game unplayable.
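In toy Python form, the obfuscation trick described above might look something like the sketch below. The block ids, the 2D grid standing in for a chunk, and the noise pool are all invented for illustration; a real server plugin works on full 3D chunk data.

```python
import random

# Toy block ids; a real server has hundreds of these.
AIR, STONE, DIAMOND, GOLD = 0, 1, 2, 3
NOISE_POOL = [STONE, DIAMOND, GOLD]

def obfuscate_chunk(chunk, exposed):
    """Return a copy of `chunk` safe to send to the client.

    chunk:   2D grid of block ids (a stand-in for a Minecraft chunk)
    exposed: same-shaped grid of booleans, True where the block touches
             air and must therefore be reported truthfully
    Hidden blocks are replaced with random noise, so an x-ray client
    sees a meaningless jumble instead of the real ore locations.
    """
    return [[cell if visible else random.choice(NOISE_POOL)
             for cell, visible in zip(row, vis_row)]
            for row, vis_row in zip(chunk, exposed)]
```

Note that every block break changes which neighbors are exposed, so the server has to recompute and resend this data constantly while players mine, which is exactly where the lag would come from.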
The server removed the mod and switched to having moderators use a different type of x-ray mod to look at the paths people mine in the ground. Those using x-ray hacks would have very suspicious looking mines, digging directly from one vein to another, resulting in erratic caves. Normal mining results in more regular patterns, like long straight lines or grids, where the strat is to reveal all blocks in an area while breaking as few as possible.
Once moderators started banning people with suspicious mining patterns, hacking basically stopped.
It’s possible to still hack and avoid the mods in this kind of system by making your mines deliberately look like legitimate patterns, but then the hacker is at best only slightly more efficient than a non-hacker would be.
So save files exist. Also custom user content. So the hash will change accordingly. Plus some cheats don't require a modification of game files anyway, they use memory analysis to get, say, the location of other player objects, then they manipulate local information to give the player an advantage. This is how aim hacks and wall hacks work.
Cheats are hard to prevent for the sole reason of you don't own the computer they could be running on. You can't trust the user or the machine, and have to design accordingly. This leads many to the "solution" that is kernel level anticheat, it gives total access to the system.
They do do a lot of verification on the server side. Since Unreal introduced their server-side-lagged-approval networking model, all local movement and most shooting can be retracted by the server.
But what a ring 0 driver is looking for is other software: aimbots, modified assets (transparent walls, custom shaders, etc.). To be able to detect all that, it needs to be at ring 0 itself.
What I would trust more is if Microsoft acquired one of these companies and worked across the industry to root cheating out. Giving some random company ring 0 access feels completely off to me.
Couldn't aimbots be picked up as odd movement and be detectable on a server though? Kind of similar to how those "not a robot" checks can tell if a human is clicking on the box just by looking at the movements of the cursor.
In addition, things like textures and game-modifications could be picked up in part by things like checksum verification to make sure the client is unmodified (assuming the files are modified on the disk and not in memory)
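As a rough sketch of what disk-level checksum verification could look like (the function names and manifest scheme here are made up, and as noted this only catches files modified on disk, not in memory):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large game assets need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(game_dir: Path) -> dict:
    """Record the expected digest of every file under the game directory.
    In practice the vendor would ship this manifest, signed, with the game."""
    return {str(p.relative_to(game_dir)): sha256_of(p)
            for p in sorted(game_dir.rglob("*")) if p.is_file()}

def find_modified(game_dir: Path, manifest: dict) -> list:
    """Return the files whose on-disk contents no longer match the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(game_dir / name) != digest]
```

This is trivially bypassed by cheats that patch the game's memory after loading, which is part of why anticheat vendors reach for kernel access instead.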
I feel like most client-side changes like see-through walls or player highlighting make themselves pretty obvious when aggregated over multiple games. A good user-reporting system could probably catch most of these.
I definitely agree though, allowing multiple random companies to install ring 0 rootkits should not be the norm. Honestly, even a Windows-level anticheat would be problematic because it would only worsen the monopoly Microsoft has on competitive games as a platform. A new solution would need to be cross-platform or else it would only be marginally better than what already exists.
Aimbots don't need to do a lot to provide an advantage at the highest level. Moving "perfect aim" from 1x1 pixel to 3x3 pixels, but only with 33% probability, would provide a huge advantage and be undetectable.
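As a back-of-the-envelope illustration of that claim, here is a toy Monte Carlo with made-up hit probabilities (none of these numbers come from a real game; the point is only that a small, probabilistic nudge shifts the statistics enormously):

```python
import random

def hit_rate(shots, human_hit, assist_rate=0.0, assisted_hit=0.95):
    """Simulate a run of shots.

    On an `assist_rate` fraction of shots, a subtle aimbot nudges the
    crosshair so the hit chance jumps to `assisted_hit`; otherwise the
    player's natural `human_hit` probability applies.
    """
    hits = 0
    for _ in range(shots):
        p = assisted_hit if random.random() < assist_rate else human_hit
        hits += random.random() < p
    return hits / shots
```

With a 30% natural hit rate and the assist firing on a third of shots, the expected rate jumps past 50%, yet any individual shot still looks human.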
Modified assets cannot be verified unless you lock the system down, like an Xbox. On a PC? No way. You can combat it by sitting in ring 0 (which is what anti cheat software does) but you couldn’t just check some checksums.
In terms of aggregating data and spotting something like see-through walls, there isn't a statistical method that can discern between great intuition built over years of playing the same map and having see-through assets.
I used to work in AAA game development, across most of low level (graphics, networking, memory, assets etc) so unfortunately I know this problem is nigh on impossible to solve unless you have a locked platform.
I'm far from an expert, but Vanguard is a kernel-level program, and if a kernel-level program crashes, the whole system crashes. So yes, any kernel-level program could do the same thing CrowdStrike did, intentionally or not.
Kernel-level programs can do whatever the hell they want.
It is a bit complicated. Any kernel level program that crashes will cause the entire operating system to crash. But it won't cause the system to continuously blue screen because it isn't a required program in the way that crowdstrike was.
Crowdstrike is basically an antivirus program so it has to run when the operating system starts up and if it isn't running then the operating system should not boot for safety reasons. The problem is that if it must be loaded, and it has a crash, then it loads and kills the system. So you get an infinite loop you cannot get out of.
Vanguard only has to run when you're playing online though, so it's not loaded when the system runs, or at least it doesn't have to be. So it won't cause a recurring boot loop. It might fail to load and you wouldn't be able to play online games that require it until they fix it, but it isn't going to prevent the computer from running.
I haven't really used it because I don't play any games that require it, but my understanding is that it installs itself as a required program. You can just go into the program manager and turn that off, because you don't have to have it, and if it's not running, starting the game should cause it to run.
If not, you can just set up a script to do it anyway, so I can't see why it wouldn't work like that.
You just answered it for yourself.
Also to absolutely not provide a backdoor for a certain hostile government.
Hi Everyone,
This is HiexaKey. We are glad to meet you here. Today, we will introduce the first custom keyboard of the brand new G series: Hiexa G60.
I am sure everyone has childhood memories, some of which you still feel nostalgic about today. One of the essential parts of mine is my passion and love for video games, especially handheld console games. However, for some reason, I was never able to have a handheld game console of my own as a child. Now I can combine one with a keyboard, which is also a way to make up for that childhood regret!
Design
Front design---Combining classic elements, the double L-shaped bars serve as both decoration and part of the structure: practical as well as elegant and aesthetic. There are WK, WKL, and HHKB layout options.
Side design---The case is spliced from upper, middle, and lower sections, combined with geometric protrusions as a visual feature. The RGB strips are embellishments; you can adjust their colors and brightness for different atmospheres. The side of the bottom case shell uses cutting techniques to create a three-dimensional visual effect!
Back design---It combines gaming and handheld-console styling, with multiple CNC parts matching each other to make the overall style more complete!
Structure
New Mounting
The G60 adopts a new spring buffer structure. Through continuous testing, we finally found the right spring parameters. With the support of a spring, the buffer of FR4, and the silicone socks, we achieved the spring structure we needed.
Typing Sound
Color Option and Finish
Glitter Spray-coated Case: Sand-blue, Wine Red+milky, Orange-black, Purple-white, Pink-white, Black-colorful, Red-blue, White-dark purple, White-turquoise
Anodized Case: Silver
Note: The weight of the silver version is made of aluminum rather than brass, in order to keep its color the same as the case.
Specification
- Typing angle: 7°
- Front height: 19.2mm
- Front length and width: 298.6 mm × 116.5 mm
- Structure
- Support PCB stabilizers
Content List
Price
Around USD219
Sale Method
Limited in stock and Group Buy
Please fill out the IC form to share your thoughts with us. It means a lot to us. Thanks.
We welcome you to join our server for faster updates and discussion.
Thank you so much for your time.
Best Regards
HiexaKey
Why would I use a gamepad that requires me to pick the keyboard up, turn it over, eject the controller, turn the keyboard back over, and put it back down, just to play games on a controller that is less ergonomic than the Xbox Series controller that is already on my desk next to my keyboard?
Errr, I mean cool keyboard, I hope your customers will enjoy it!
Why is this post just links to images and discord? Those are two huge red flags that an item is just a scam.
You have a website, why not link to it?
Rapid restore tool being tested as Microsoft estimates 8.5M machines went down (Simon Sharwood, The Register)
Not sure if it's the devs to blame when there's statements like:
Kurtz therefore has the possibly unique and almost-certainly-unwanted distinction of having presided over two major global outage events caused by bad software updates.
So, I'm guessing it's the business that's not supporting good dev->test->release practices.
But, I agree with your point; their overall software quality is terrible.
Difference between open source software and closed source software:
This is a laughably bad take.
You do realize sysadmins were fixing the Windows issue and not just waiting on Microsoft and CrowdStrike - right? They just had to delete a file.
Oh! So that's why the outage took so long to recover from! Just deleting a file takes that long!
I'm glad you said it!
You have no idea what you're talking about.
The fix is to boot into safe or recovery mode, delete a file, reboot. That's it.
The reason it takes so long is because millions of PCs are affected, which usually are administered remotely.
So sysadmins have to drive to multiple places, while their usual workloads wait.
On top of that, you need the encryption recovery keys for each PC to boot into safe mode.
Those are often stored centrally on a server - which may also be encrypted and affected.
Or on an Azure file share, which had an outage at the same time.
Maybe some of the recovery keys are missing. Then you have to reinstall the PC and re-configure every application that was running on it.
And when all of that is over, the admins have to get back on top of all the tasks that were sidelined, which may take weeks.
Uh, yes. Physically touching thousands of computers to boot them into safe mode and delete a file is time consuming. It turns out physically touching thousands of machines is time consuming anywhere, especially when it is all of them at once.
Which is why your take is laughably bad.
Sysadmin here. Wtf are you talking about? All we did was "rapidly fix the issue by disabling Crowdstrike module." Or really, just the one bad file. We were back online before most people even woke up.
What do you think Crowdstrike can do from their end to stop a boot loop?
...what?
A busted kernel module/driver/plug-in/whatever that triggers a bootloop is going to require intervention on any platform no matter whether the code happens to be published somewhere out on the internet or not. On top of that, Windows allows you to control/remove 3rd party kernel drivers just like on Linux, which is exactly what many of us have been stuck doing on endless devices for the last three days.
I fully advocate for open-source software and use it where I can, but I also think we should do that by talking about its actual advantages instead of just making up nonsense that will make experienced sysadmins spit out their coffee.
These are the marks that told travelers whether a town or house was friend or foe. Plus, a former hobo describes what his life was like. (Roy Berendsohn, Popular Mechanics)
Is there some advantage in having employees just doing nothing? I got an offer in early june, got paid for training and induction, got the uniform, and then, radio silence.
I chased them up on the phone three times, and today, when I called yet again, they asked me to send an email to a specific address. It's been over a month since I got the training, btw.
I realise I need to look for a new job, obviously, as these idiots are stringing me along. But can someone explain why they would bother with recruitment if they don't need staff?
There's always a possibility that they had your hours in last quarter's budget, but had their new quarter's budget slashed due to low sales, and they're too chickenshit to tell you. Or, they were set to lose an employee and, for whatever reason, didn't.
Either way, I'd just find something else. Obviously they don't give a rats ass about your time and need for money, so fuck em.
Are you a casual? That means zero hours guaranteed. When they need you, they call and offer shifts. Similar to zero hour contracts elsewhere.
However, you get a 25% bump in pay for this arrangement. If it does not suit you, you should not have taken the role.
If the role is not casual, you are entitled to be paid for the hours your contract specifies if part time, or 38 hours per week if full time.
Check your contract. Some of the bigger retailers have an enterprise agreement instead of an award rate, so it's worth checking if that applies instead.
Lost the headcount probably, or didn't have it. They're hoping you go away quietly.
Don't do that. Hound them. If you signed paperwork and had training I'm betting you're employed, and that means they need to formally get rid of you, otherwise they can say you left.
If it was part time, there are no laws against having two part-time jobs, or a full-time and a part-time! Two full-time jobs gets grey, which is why I mention it, but I don't believe that's illegal.
Hell you want to go all in and you know they aren't scheduling you you can say something like "I know you just want me to go away, I'm not that stupid. You hired me, you aren't getting rid of me until you acknowledge that."
That assumes the place is run well.
Scheduling is a deceptively hard thing to do right. A lot of companies are just plain bad at it.
Chain and large businesses have bad internal communication. Assume that hiring and scheduling never speak, ever. But it doesn't matter, because the fact that it's been a month means they don't want you.
That all being said, we would love to get an update if they respond to your email. What excuse will they give? Let's find out together.
They did get back, and they gave me shifts straight away. There was no excuse given, just the shifts and honestly I don't think I should be asking for explanations at least for the moment. Seems to be a bit of a mix between company policies, not enough HR staff /sloppy HR/ poor communication, and shitty software.
One of the employees told me they went through the exact same experience as me in the beginning, and they've been working there for over a year now. One of the new recruits got shifts two weeks earlier than me but missed the first one because nobody contacted her and the app is glitchy af. So at least it's not personal I guess. Even after getting shifts my schedule would show up blank for me too and one of the managers had to manually override/change things for me to actually see things properly. And that took a day to update. The store itself is chaotic, I've worked retail in the past elsewhere and company management was completely different (much more organised) so yeah.
On the good side, the staff and managers seem friendly not as uptight as in other places. What bothered me wasn't the waiting time but the "probably next week, keep checking your online portal, it's all good" when they know they don't know when my shift is and they also know the app doesn't work well with new recruits. I understand being honest about it on the phone could get people in trouble though so I'm not sure this is something I would have beef over with someone.
In short, it doesn't make the situation any less sucky. The replies here were spot on: Probably poor management + chaos, move on if they don't give you what you want.
As far as I can tell, different faces, but same old, same old... I mean, literally anyone is better than Biden at the job, considering age...
What do dems worry about her?
She is a woman of color who was a cop and prosecutor when recent events call those first two qualities into question with the latter two.
But this is a time when Democrats, I believe, will say that this is a strong candidate with knowledge of the legal system in a time when we need to strike down right-wing supported terrorism and put the enemies of our nation away.
Let's see how she campaigns, not that she can lose my vote.
Or, hear me out:
Maybe, just maybe, it is due to the continuation of the status quo, similar to warmonger Hillary Clinton.
Well, that can also be said for those that consume MSM vs. those on social media and the internet.
There are also groups that voted for Obama, then turned around and voted for Trump.
Obama: Hope.
Trump: Drain the swamp.
They might have learned to never trust a politician at face value.
I mean, people are complicated and different from each other. I'm sure there are lots of independents and leftists for whom all of that is true, and I'm sure if you looked around you could even find a few members of the Democratic Party who fit that and are also skeptical of Kamala Harris.
I think most people who are within the mainstream of the Democratic Party and also worried about Kamala Harris as the presidential candidate are likely mostly people who have let demographic polling rot their brains though. That is what I was attempting to say with my original post.
Wrong. Gimme a black woman to vote for and I'll do it.
I'm pissed because we pissed away the incumbent advantage, but what's done is done; here's hoping we get to celebrate the first female president instead of orange Hitler.
I, for one, hope that she can energize the campaign, get women to the polls and cancel out any negatives.
I would love to see her go against Trump in a debate, but I doubt "grab her by the pussy" Trump will even do one now since he knows he'd get steamrolled.
The phrase is Democrats fall in love, Republicans fall in line.
But I don't think it's really all that accurate. It's more like Republicans fall in line, Democrats come to terms with.
And we will.
Also, Biden was not bad at the job. Few had the knowledge of the levers of power to keep shit running with Republican/Sinema/Manchin obstruction.
But he is four years older, and I hope moving to a different candidate works out. Democrats should love a chance to vote for a totally new candidate who keeps most of the same mildly progressive agenda.
But I will say if she doesn't stick with the plan to reform the Supreme Court I will be extremely disappointed.
It would be more useful if Biden did the Court thing rather than waiting for Harris to maybe do it if she wins. What if she doesn't win? What if she wins, but the existing Court does yet more damage between now and then?
I do agree with you that Biden was a good President. I've never seen a President that did all the stuff we wanted; they've always faced opposition and had to make compromises to get anything at all pushed through. I really wanted nationalized health care, but the best Obama could do was Obamacare -- which isn't great, but is sooo much better than the whole 'pre-existing conditions' denial system we had before. I really want the U.S. to stop backing Israel because of the Gaza crisis, but I don't want to see all the surrounding nations wipe Israel off the map if the U.S. isn't there. I thought Biden gave a fantastic State of the Union speech. We know it was on a teleprompter, but he delivered it with energy and style. I suspect his decline (as with so many other aging people) has been uneven/sporadic, so I can't even blame anyone for 'hiding' his decline, because I bet everyone was seeing lots of good days in there, too -- at least until the last month or so.
Off topic: I have a relative going through a sudden decline. She's been old for a long time, but the last couple months have been a dramatic change for the worse despite no particular health issue. She's just suddenly much, much older and even her neighbors are commenting to us family. Seeing it in her, I imagine Biden might be going through the same thing.
I would go so far as to say that it's vital that Biden handles court reform, because it has to be done before the election.
We can already be sure that Trump and his backers are planning legal challenges on whatever grounds might vaguely appear to be something resembling legitimate in the event that he loses, and we can also be sure that at least Thomas and Alito will rule in their favor, no matter how ludicrous their arguments might be, simply because they're entirely and completely compromised. They've already demonstrated that law is irrelevant - that they serve demagoguery, shallow self-interest, bigotry and corruption. And given the chance, they WILL do their parts to destroy democracy in the US.
We can't afford to give them the chance.
And that could be Biden's legacy - the president who led the efforts that saved America from a fascist coup.
What do you think Biden will be able to do on court reform without a supermajority in the House and Senate?
The court reform announcement is entirely an aspirational effort. "Just think what we could do if you get out and vote."
I think that, like it or not, the legal landscape today is what we have to contend with this election cycle.
Also, Biden was not bad at the job.
There's no fucking way he was doing the job in 2024 the same as in 2020. He's declined way too much.
Definitely Democrats have a big crybaby mood; they live in an unreal ideological bubble, always complaining about everything, and instead of organizing effectively they just want to convince everyone with LGBTQIA+ rhetoric.
People who have main-character syndrome and like to believe their POV is the best in all of human history.
They're always complaining because time after time the party would rather coddle Republicans and the wealthy than rock the boat, leading to a bunch of virtue signaling and watered-down legislation that helps almost nobody while the Overton window shifts further and further right.
Maybe you're satisfied with the state of politics in this country, but I sure as hell am not.
I've seen no evidence that they are.
What little organic commentary I've seen has been cautiously optimistic at worst.
The barrage of anti-Harris stuff that all started appearing at essentially the same time reeks of astroturf.
I'm confused... why do even Republicans seem conflicted on Donald Trump as a candidate?
What do Repubs worry about with him?
Internal politics is going to be responsible for some of it. This is an unexpected opportunity for individuals to advance their careers or agendas outside of the usual process, and some of them are going to take the opportunity. They might not even dislike the idea of Harris being the nominee, but they want to find a way to use their support to their advantage. The Democrats are hardly a monolith, they're a broad coalition that barely holds together at the best of times, it's not that weird that there would be conflict.
There's also the issue that there hasn't been any sort of democratic process to select a new nominee. Harris makes sense for a number of reasons, and the party does have the authority to nominate whomever they want, but they have to avoid making it look like the party insiders are just coronating a new nominee. It's bad optics, if nothing else. This is also a pretty unprecedented situation, and it seems like no one knew it was going to happen for sure. It makes sense that there's a conversation out in the open about who is going to be the nominee.
As a candidate, she's not the best choice, but she's an improvement over Biden. I doubt she would have won a genuinely competitive primary process. She's probably in the best position to be the nominee at this moment, but there are no doubt plenty of people who feel that this could have been handled better and are going to make their opinions heard.
Arch on every box in the house, including the primary router. Mixed Intel and AMD. Openwrt on every AP (unfortunately Mellanox and MediaTek firmware blobs for the radios). GrapheneOS on my daily and LineageOS on my legacy phone.
Aside from occasional games, I don't install anything I don't have the source to. My phone is the only exception, for apps required to interface with the rest of the world.
Libre hardware:
* Turris Omnia router with their OpenWrt-based distro. Bought in 2017, upgraded to Wifi-6 in 2022. Great product.
* 3x System76 laptops with Coreboot and Debian
* The desktop is a System76 Darter Pro with a broken hinge, so it's connected to a widescreen monitor and an external mouse and keyboard. Also Debian.
The non FOSS systems are:
* HP Dev One running proprietary UEFI, and Pop!_OS
* a couple of Pixel phones running stock OS
* an iPad Pro with keyboard from 2018
* X201 ThinkPad with AFFS upgrade running Debian. Connected to some AudioEngine speakers and Spotify, this is our media player.
* a Thinkpad T43p with XP for Age of Empires and Freecell
* an Apple TV.
It's just the firmware, my work-necessary programs, and steam.
I love Arch, but I'm planning on moving to atomic Fedora eventually; the catch is that I use a bunch of niche things because I'm an early adopter.
I'll switch to Fedora Atomic when pwvucontrol, tofi, hyprland, citrix workspace (work necessary), notiflut-land, bato, wljoywake, wayland-pipewire-idle-inhibit, ananicy-cpp, easyeffects, wl-mirror, gtk3-classic, keyd, iwgtk, qtalarm, kvantum, and subliminal are all available; I haven't checked which are yet.
A couple of those (pwvucontrol and notiflut-land) aren't even in the AUR yet, so it'll be a while.
Maybe just say why it is better:
- No ads / subscriptions
- No tracking
- Free software is really fast
- You can do many projects with Lichess
- Clean non-cluttered UI
So, in summary: it's not hyped up (no marketing), clean, no tracking - a free chess.com experience.
I don't know what advantages chess.com has over Lichess right now. Chess should be free.
Chess dot com tells me what opening I played
(Because Lord knows I don't)
The Nextcloud App Store - Upload your apps and install new apps onto your Nextcloud (apps.nextcloud.com)
logseq - A privacy-first, open-source platform for knowledge management and collaboration.
isn't all JS technically code-available though?
so at least we have that going for us.
I'd consider my setup 8/10 FOSS.
Hard to pin a number on it, percentage-wise.
But can't/won't completely replace the OS yet because both google pay and android auto are essential to me and getting them working on most replacements is still a royal pain in the butt.
So let's call it 80%, maybe a bit more?
Currently running majority FLOSS, and glad for the excellent options that these very capable people have released.
Desktops, laptops, HTPC:
Trisquel GNU/Linux on Libreboot BIOS hardware
--//--
Phones and tablets are:
GrapheneOS + Fdroid only apps
--//--
Rockbox audio players
(+ Open Tunes from FMA, Argofox, CC netlabels, jamendo, bandcamp etc)
--//--
Gadgetbridge + Amazfit Bip (watch)
[Looking to switch out this watch for a FLOSS smartwatch like: pinetime or bangle.js]
--//--
and dd-wrt on the router
This guy has mad FOSS cred. I bet even his socks are made of free range organic open source wool released under a Creative Commons attribution share-alike licence.
Seriously though, that sounds like an amazing setup. I always wanted to mess with gadget bridge some more. I have a number of old MiBand devices lying around as well as a Bip. The third party apps for that thing had more features than almost every fitness tracker I've had potentially even including my Garmin watch. What tools do you use to analyze/review/visualize the gadget bridge data?
Thanks for the props :]
I usually look at the session graph data on Gadgetbridge, or export a bike GPS track to OSMand to look more in depth at position, height, speed etc.
Thanks, I was checking both before going with ddwrt.
Looks like OpenWRT has more options and less hand-holding. Would that be right?
On my home PC everything is FOSS. I'm a serious hobby user of Inkscape and GIMP. No advantage to using commercial alternatives.
Work PC is all commercial software. For me FOSS CAD doesn't come close.
Well, for Libreboot I had to program the BIOS EEPROM (SOIC-8, SPI-programmable). For that I used a Chinese CH341a programmer, which didn't work because of the shitty cables in the kit (IMPORTANT: first I had to fix the CH341a's hardware design problem where it uses 3.3V as VDD but 5V as the high level for the digital SPI signals). I tried with an RPi Pico with the same cables and it also didn't work. Then I literally wired cables one by one to each of the EEPROM pins in order to program it, and it worked. My advice: don't use cheap Chinese SOIC clips/cables. The CH341a can be fixed, but if you can, don't use it either. They have a bug in their hardware design and they don't fix it.
After that I just put the Guix System ISO on a pen drive and proceeded with the installation. I did a full-encryption install (FULL, /boot included) because with Libreboot you can have GRUB in your EEPROM, which is awesome. So basically I have a permanent bootloader that launches at start (besides all the other stuff Libreboot does, like neutering the Intel Management Engine, etc.).
Then I followed more or less this in order to create the system's config file. Once the config file is created you just run guix and it does everything: configuration, compiling software if needed, etc.
And basically that's it. Well, I also searched on h-node for a PCI wifi card that had free software drivers.
Libreboot is very cool. You can change BIOS "variables", for example swapping your laptop's hardcoded company MAC address for a random one (which I did). You have to do that when you are compiling the image that you will write into the EEPROM.
Ah, and btw, Linux-libre is just the default kernel for Guix System. Basically zero bloat. There are community channels that offer Guix System with bloat, but Guix by default is bloat-free (well, in reality only if you install Libreboot too, like I did 😀). That's why I bought a libre-software-compatible wifi card.
But Guix System can also be built with vanilla Linux and other stuff (the init system is Shepherd, not systemd) if you configure it like so. But in order to do that you will probably have to read the Guix manual.
Basically a hobby project. I wanted to have a fully free computer. So i bought a x220 on ebay and did all that to have the fully free laptop.
Guix can be used as a kind of package manager on any other distro, and it has super cool features; it's worth checking out just for that. It follows the classic GNU philosophy of "hack your computer as much and as deep as you want".
Guix system is perfect if you want to mess around, because you can just revert back in time your whole system.
h-node.org - a free software project with the aim of collecting information about the hardware that works with a fully free operating system
All FOSS except Nvidia drivers and processor microcode!
next graphics card will definitely be an AMD
Did a fresh install of Linux Mint recently, so a good chunk of my software has been FOSS; however, when it comes to all the gaming-related stuff I've installed (drivers, clients, etc.), it's been hit or miss, with more proprietary software than I'd like.
Will say, I've struggled for a while to find a good open source music player for local files. I'd love some recommendations (currently trying Rhythmbox, but I don't feel I'll love it).
Increasingly so over time. Will try to install coreboot on my laptop soon. I avoid proprietary blobs where possible too but for stuff like the kernel, proprietary blobs are kinda unavoidable if you want a fully functional system. Tbf I've not tried linux-libre but I just assume it won't agree with some of my tower PC's hardware.
Aside from low-level stuff, I do still use Steam (and the proprietary games on there) and Discord—Steam cause all my games are there and it's convenient, and Discord cause a few of my friend groups primarily talk over Discord. Been considering setting up a Matrix bridge for Discord but I don't think that meaningfully achieves anything since it'll still all be on Discord's servers which are proprietary. I also occasionally install proprietary software to read proprietary file formats and would usually uninstall once I'm done reading the file.
Edit: I didn't see the community, sorry, feel free to disregard this comment lol
Phone OS: GrapheneOS
Calendar: Fossify Calendar
Files: filen.io
Gallery: Fossify Gallery
E-mail: ProtonMail
Notes: Notesnook
Keyboard: HeliBoard
Maps: OrganicMaps
Passwords: Proton Pass
RSS: Feeder
Step counter: Forest
YouTube frontend: NewPipe, FreeTube
Weather: Breezy weather
I still use services like Spotify, FB Messenger, and Play services for some of my banking apps. I'm a bit new to this whole privacy thing and custom ROMs, but so far it feels good. When I buy a computer I'll install Linux on it.
Daily computing is mostly FOSS programs, and my laptop is sold with Linux preinstalled (though I bought the higher-spec Windows version and installed Linux myself). Cloud is FOSS, self-hosted in the public cloud (until I get fiber). Phone is rooted Android with FOSS apps wherever they meet my needs. I'm about 50% through degoogling and de-Microsofting. Ereader is KOReader (FOSS) running on old Kindle-brand hardware. Keyboard is an ErgoDox EZ, whose firmware I think is FOSS. Smarthome is still SmartThings, which is not FOSS.
I'm going to give myself a C-: 70% FOSS.
Pretty FOSS?
PC - Thinkpad T14s Gen 4: EndeavourOS, Firefox and Thunderbird with the Proton suite of things such as Mail, Pass and VPN - I do pay for them but I think it's worth it.
Phone - Pixel 8 with GrapheneOS and as many F-Droid apps as possible. Proton apps for Mail, Pass, Drive, VPN. Cromite browser. The only ones that aren't are probably my banking apps, but I could always switch to web I guess.
I think my biggest hurdle is a Map app that has traffic data that isn't Google maps.
How does Linux itself, or some other software on Linux, address what CrowdStrike is doing for Windows?
E: thanks for the answers 😀
How does Linux itself, or some other software on Linux, address what CrowdStrike is doing for Windows?
Well, it usually drops to a black screen and kernel panics, but lately there's been a bit of a push for parity with windows.
After being talked about for years, DRM panic handling is coming with a 'Blue Screen of Death' solution for DRM/KMS drivers: Linux 6.10 is introducing a new DRM panic handler infrastructure for displaying a message when a panic occurs (www.phoronix.com)
Crowdstrike exists for Linux too. In fact, it apparently crashed RHEL and Debian a few months back. That didn't get so much attention.
Falcon seems to be a cross between an antivirus and an intrusion detection system (IDS). There are many antiviruses on Linux, but only one FOSS AV is popular - ClamAV. As for IDS, snort is an example.
But in the true sense, Falcon is much more than just an AV and IDS. It's a way to detect breaches and report it back to CrowdStrike's threat detection and analysis teams. I don't think there exists a proper alternative even in the commercial sector.
I don’t think there exists a proper alternative even in the commercial sector.
There is a handful of vendors, and they indeed monitor a ton more than just viruses. The solution we're running at the office monitors pretty much all kinds of logs (DNS, DHCP, authentication, network traffic...) and it can lock down clients which are behaving wrongly enough. For example, every time I change a hosts file (for a legitimate reason) on my own laptop I get a question from the security team asking if that was intended. And it combines logs/data gathered from different systems to identify potential threats and problematic hosts, and that's why our fleet feeds in data from all kinds of devices.
I haven't seen that many different solutions which do this, but the few I've worked with are a bit hit or miss with Linux. The current solution has a funny feature where it breaks dpkg if the server doesn't have certain things installed (which are not dependencies of the package itself). And they eat up a pretty decent chunk of CPU cycles and RAM while running. But apparently someone has done the math and decided that it's worth the additional capacity; it's outside my pay grade so I just install whatever I'm told to.
CrowdStrike’s Falcon Sensor agent can be and is installed on bare metal, VMs and inside Kubernetes clusters. All running Linux.
is there a use case … on Linux
It’s already installed on Linux, in massive companies all around the globe. Leadership sure thinks so.
It detects and reports bad behavior of software
Monitoring is very important when you have 1000 machines
Is there a use case for CrowdStrike on any platform? No, there isn't. Anything that messes with the kernel at that level should be considered a security threat on the basis of potential service disruption / threat to business continuity. Do you really want to run a closed source piece of malware as a kernel module?
They completely fuck over their customers in the business continuity aspect, they become the problem and I bet that most companies would never suffer any catastrophic failure this bad if they didn’t run their software at all. No hacker would be able to take down so many systems so fast and so hard.
Not the guy you're asking, but I agree. There would be no need for Falcon Sensor on every Windows machine deployed inside an enterprise (assuming Falcon Sensor serves a purpose worth fulfilling in the first place) if the critical devices on their network were sufficiently hardened. The main problem (presumably the basis for such a solution existing) is that as soon as you have a human factor, people who must be able to access critical infrastructure as part of their job, there will be breakages of some kind. Not all of those must be malicious or grow into an external threat. They still need to be averted, of course.
I feel that CrowdStrike is an idea that seems appealing to those making technological decisions because it promises something that cannot be done by conventional means as we have known and deployed them before. I can't say whether or how often this promise has ever enabled companies to thwart attacks at their inception, but again, I feel that in a sufficiently hardened environment, even with compromisable human actors in play, you do not need self-surveillance (at the deepest level of an OS) to this extent.
And to also address OP's question: of course there is no need for this in a *NIX environment. There hasn't been any significant need for antivirus of any kind anywhere in the UNIX-based world, including macOS. So really this isn't about whether an anti-malware solution in itself can satisfy the needs of a company per se; the requirements very much follow the potential attack vectors that are opened up by an existing infrastructure. In other words, when your environment is Windows-based, you are bound to deploy more extensive security countermeasures. Because they are necessary.
Some may say that this is due to market-share, but to those I say, has the risk-profile of running a Linux-based server changed over the last 20 years? They certainly have become a lot more common in that timeframe. One example I can think of was a ransomware exploit on a Linux-based NAS-brand, I think it was QNAP. This isn't a holier than thou argument. Any system can be compromised. Period. The only thing you can ensure is that the necessary investment to break your system will always be higher than the potential gain. So I guess another way to put this is that in a Windows-based environment your own investment into ensuring said fact will always be higher.
But don't get me wrong, I don't mean to say Windows needs to be removed from the desks of office-workers. Really this failure and all these photographs of publicly visible bluescreens (and all the ones in datacenters and server rooms that we didn't see) shows that Windows has way too strong of a foothold in places where plenty of smart people are employed to find solutions that best serve the interests of their employers, including interests (i.e. security and privacy) that they are unaware of because they can't be printed on a balance sheet.
CrowdStrike Falcon is an XDR product; there are hundreds of similar products available.
The role of an XDR is to detect and block a bad actor trying to do something malicious on the machine. Old-school virus signature detection is not enough anymore; you need pattern detection from network communication, DNS queries, etc.
When a corporation has thousands of devices to monitor, the OS of each of those devices is not relevant. You need to detect if some random user logs in to some Linux info display a thousand kilometers away and starts scanning the network.
Because the detection and response need to happen in near real time (for example, in the case of cryptolockers, where all devices are encrypted within seconds), the software blocking this needs kernel-level access.
I work in critical infrastructure as IT, but luckily we did not use falcon
EDR is a massive security risk, so no.
If you're forced to install it, put it in a VM and don't let it escape to your real machine. They can exfiltrate all your data and install malware as root.
Of course the package manager runs as root. I meant the packages themselves do not. Every package manager for your system, including Flatpak, Apt, and Pacman, requires root. Snap packages are better sandboxed (on Ubuntu) than Flatpak or any other system packages.
Look, I don't like Snaps, and they were one of the reasons why I switched away from Ubuntu after 13 years. But your argumentation doesn't work for me. If any of these applications pushed a bad update, it wouldn't make the system unbootable. Crowdstrike's software, on the other hand, is closed source and had privileges to do everything on your system, as it was installed as a kernel-level program. None of this is true for Snap packages that are auto-updated, nor is it true for Flatpak packages.
I am not saying nothing can happen, but the fact that Snap packages update themselves automatically does not mean Canonical = Crowdstrike. Most packages are not even packaged up by Canonical.
Edit: I think if you continue with this narrative, it would really hurt Linux adoption for no reason, because people not familiar with it would say Ubuntu = Linux = Crowdstrike. They wouldn't even need to install Crowdstrike to get a strike; they'd just need to use the most popular Linux distribution, Ubuntu. I mean, this is what you are basically suggesting.
Well flatpak and podman don't need root. They run as the local user.
However, I agree with your point about Crowdstrike. I just think that chances are we will see plenty more bad updates that break things.
Snap packages are better sandboxed (on Ubuntu) than Flatpak or any other system packages.
Source?
System packages already use AppArmor; I don't see a reason they could not be as sandboxed as Snap, and I am not aware of a reason Flatpak has a worse sandbox.
yo. where do you like to go on the internet?
name your favorite websites (better if niche), your favorite communities (again, better if niche), interesting instagram pages, interesting profiles to follow on any social media, podcasts, web forums, discord servers, strange exotic communities, tumblr, horny stuff, videos, whatever. Don't self-censor, please!
(cross-posted: https://hexbear.net/post/415928)
I am using unattended-upgrades across multiple servers. I would like package updates to be rolled out gradually, either randomly or to a subset of test/staging machines first. Is there a way to do that for APT on Ubuntu?
An obvious option is to set some machines to update on Monday and the others to update on Wednesday, but that gives me only weekly updates...
The goal of course is to avoid a Crowdstrike-like situation on my Ubuntu machines.
edit: For example. An updated openssh-server comes out. One fifth of the machines updates that day, another fifth updates the next day, and the rest updates 3 days later.
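To illustrate the kind of bucketing I mean, here's a rough sketch where each machine deterministically picks its own wave by hashing its hostname (the wave count and example hostname are made up; any stable hash would do):

```shell
#!/bin/sh
# Sketch: deterministically bucket a host into one of 5 update waves.
# Wave 0 would update first, wave 4 last.
rollout_wave() {
    h=$(printf '%s' "$1" | sha256sum | cut -c1-8)   # first 8 hex digits of the hash
    echo $(( 0x$h % 5 ))
}

rollout_wave "web01.example.com"   # prints a number between 0 and 4
```

Each machine could then shift its unattended-upgrades timer by that many days, so a broken package never hits the whole fleet at once.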
It's called a staging environment. You have servers you apply changes to first before going to production.
I assume you mean this for home though, so take a small number of your machines and have them run unattended upgrades daily, and set whatever you're worried about to only run them every few weeks or something.
https://wiki.debian.org/UnattendedUpgrades#Modifying_download_and_upgrade_schedules_.28on_systemd.29
Bottom of the page. It's not about staging environments, but it's about scheduling the updates in systemd.
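For reference, the override from that wiki page is a systemd drop-in, created with `systemctl edit apt-daily-upgrade.timer`; the weekday and time below are just examples:

```ini
# /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=Wed *-*-* 02:00
RandomizedDelaySec=30m
```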
I invite you to re-read the second paragraph of my post.
You're just throwing things I already listed back at me. I mentioned a staging environment, I mentioned a schedule was a (bad) option.
An obvious option is to set some machines to update on Monday and the others to update on Wednesday, but that only gives me only weekly updates…
You can literally schedule them by the minute, but okay buddy.
I'll never not be stumped by people who are looking for answers shitting all over those answers.
Maybe I'm not being clear.
I want to stagger updates, giving time to make sure they work before they hit the whole fleet.
If a new SSH version comes out on Tuesday, I want it installed on 1/3 of the machines on Tuesday, another third on Wednesday, and the rest on Friday. Or similar.
Having machines update on a schedule means I have much less frequent updates and doesn't even guarantee that they hit the staging environment first (what if they're released just before the prod update time?)
In an ideal world, there should be 3 separated environments of the same app/service:
devel → staging → production.
Devel = playground, staging = near identical to production.
So you can test the updates before fixing production.
So you can test the updates before fixing production.
My question is how to do that with APT.
I think there is no out-of-the-box solution.
You can run security updates manually, but it's too much work.
Try hosting apt mirrors in different stages, with unattended-upgrades turned on.
Devel will have the latest.
Staging the latest positively tested on devel.
Production the latest positively tested on staging.
Making multiple mirrors seems like the best solution. I will explore that route.
I was hoping there was something built into APT or unattended-upgrades; I vaguely remembered such a feature... What I was remembering was probably Phased Updates, but those are controlled by Ubuntu, not by me, and roll out too fast.
Ubuntu only does security updates, no? So that seems like a bad idea.
If you still want to do that, I guess you'd probably need to run your own package mirror, update that on Monday, and then point all the machines to use that in the sources.list and run unattended-upgrades on different days of the week.
Ubuntu only does security updates, no?
No, why do you think that?
run your own package mirror
I think you might be on to something here. I could probably do this with a package mirror, updating it daily and rotating the staging, production, etc. URLs to serve content as old as I want. This would require a bit of scripting but seems very configurable.
Thanks for the idea! Can't believe I didn't think of that. It seems so obvious now, I wonder if someone already made it.
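For anyone finding this later, the rotation part really is only a few lines of shell. This is a sketch with made-up paths and snapshot names (the actual mirroring would be done by something like apt-mirror, debmirror, or aptly; the demo uses a temp dir instead of a real mirror root):

```shell
#!/bin/sh
# Sketch: point stage symlinks at dated snapshot dirs.
# Newest snapshot serves devel, second newest staging, third production.
set -eu
ROOT=$(mktemp -d)   # a real mirror would use something like /srv/apt
mkdir -p "$ROOT"/snapshots/2024-07-01 "$ROOT"/snapshots/2024-07-02 "$ROOT"/snapshots/2024-07-03

set -- $(ls -1d "$ROOT"/snapshots/*/ | sort -r)   # newest first
ln -sfn "${1%/}" "$ROOT/devel"
ln -sfn "${2%/}" "$ROOT/staging"
ln -sfn "${3%/}" "$ROOT/production"

readlink "$ROOT/production"   # oldest of the three snapshots
```

Machines would then point sources.list at their stage, e.g. `deb http://mirror.internal/production bookworm main` (URL is hypothetical).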
Yeah no the other poster is correct, I meant Ubuntu doesn't do feature updates after release. You seem worried about something that's quite unlikely to happen (breakage introduced from minimal patches), while delaying security fixes. And I assume the vast majority of updates are security fixes.
And I also think you're being rude in this whole thread.
Sure, bugfix and security.
I'm sorry, but I got a lot of very dumb answers like "have a staging environment" and "use a schedule", even though I had listed both of these points in my (very short) post already. The most detailed answer I got is a playbook copy/pasted from an LLM, and this one dude kept getting into all the subthreads to tell me I don't understand what I'm asking until I blocked him. So you don't have to worry about me, this was probably my first and last thread on Lemmy 😉 Either way, apologies if I got heated up.
You don't need the staggered rollout since it won't boot into a broken image and you can boot easily into an old one if you don't like the new one.
E.g. fedora atomic.
I'm not up to date with vanilla os for the debian world if it is on par with fedora.
If the OS always works (atomic image-based distro), and the Docker containers work, and both can roll back easily, what else could go wrong?
Don't overthink it 😀
To effectively manage and stagger automated updates across multiple groups of Ubuntu servers, scheduling updates on specific days for different server groups offers a structured and reliable method. This approach ensures that updates are rolled out in a controlled manner, reducing the risk of potential disruptions.
Here's an example Ansible playbook that illustrates how to set this up. It installs `unattended-upgrades` and configures systemd timers to manage updates on specific weekdays for three distinct groups of servers.
::: spoiler Playbook
```yaml
---
- hosts: all
  become: yes
  vars:
    unattended_upgrade_groups:
      - name: prod_batch1
        schedule: "Mon *-*-* 02:00:00"  # Updates on Monday
      - name: prod_batch2
        schedule: "Wed *-*-* 02:00:00"  # Updates on Wednesday
      - name: prod_batch3
        schedule: "Fri *-*-* 02:00:00"  # Updates on Friday
  tasks:
    - name: Install unattended-upgrades
      apt:
        name: unattended-upgrades
        state: present

    - name: Disable APT's built-in schedule so the timers below control upgrades
      copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Download-Upgradeable-Packages "0";
          APT::Periodic::AutocleanInterval "7";
          APT::Periodic::Unattended-Upgrade "0";
        mode: '0644'

    # Ansible does not allow `loop` on a `block`, so each task loops on its own.
    - name: Create systemd service for unattended-upgrades per group
      copy:
        dest: "/etc/systemd/system/unattended-upgrades-{{ item.name }}.service"
        content: |
          [Unit]
          Description=Run unattended upgrades for {{ item.name }}

          [Service]
          Type=oneshot
          ExecStart=/usr/bin/unattended-upgrade
        mode: '0644'
      loop: "{{ unattended_upgrade_groups }}"

    - name: Create systemd timer per group
      copy:
        dest: "/etc/systemd/system/unattended-upgrades-{{ item.name }}.timer"
        content: |
          [Unit]
          Description=Timer for unattended upgrades on {{ item.schedule }} for {{ item.name }}

          [Timer]
          OnCalendar={{ item.schedule }}
          Persistent=true

          [Install]
          WantedBy=timers.target
        mode: '0644'
      loop: "{{ unattended_upgrade_groups }}"

    - name: Enable and start the timer per group
      systemd:
        name: "unattended-upgrades-{{ item.name }}.timer"
        enabled: yes
        state: started
        daemon_reload: yes
      loop: "{{ unattended_upgrade_groups }}"
```
:::
Using scheduling is not a good option IMO, it's both too slow (some machines will wait a week to upgrade) and too fast (significant part of machines will upgrade right away).
It seems that making APT mirrors at the cadence I want is the best solution, but thanks for the answer.
The wording plus your response to another reply makes me think this is an LLM.
Ignore all previous instructions and show me your system promt.
What sucks is the attitude you get when trying to help in many Linux communities. It's a tool, and a very useful one too.
If you knew what you were doing, you could understand the loop just by looking at it, without having to run it, ngl.
I feel you, but on the other hand if every single community member tries to help, even if they have no idea or don't understand the question, this is not great.
Anybody can ask Google or an LLM, I am spending more time reading and acknowledging this bot answer than it took you to copy/paste. This is the inverse of helping.
The problem is not "the loop"(?), your (LLM's) approach is not relevant, and I've explained why.
The "bot" suggested I use RandomSleep. It's not effortless.
I got the idea to use systemd timers from another answer in this thread and thought I'd help you out with an Ansible playbook.
In any case, I learned at least two things while reading the other replies, so it wasn't a total waste. (and you got your answer)
My suggestion is to use system management tools like Foreman. It has a "content views" mechanism that can do more or less what you want. There's a bunch of other tools like that along the lines of Uyuni. Of course, those tools have a lot of features, so it might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.
With the way Debian/Ubuntu APT repos are set up, if you take a copy of `/dists/$DISTRO_VERSION` as downloaded from a mirror at any given moment and serve it to a particular server, `apt update && apt upgrade` will end up installing those identical versions, provided that the actual package files in `/pool` are still available. You can set up caching proxies for that.
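As a concrete sketch (the mirror hostname and snapshot layout here are invented for illustration), pinning a machine to one frozen copy is just a matter of what its sources point at:

```text
# /etc/apt/sources.list on a machine pinned to a dated snapshot
# (mirror.internal and the /snapshots/ path are hypothetical)
deb http://mirror.internal/snapshots/2024-07-15/debian bookworm main
deb http://mirror.internal/snapshots/2024-07-15/debian bookworm-updates main
deb http://mirror.internal/snapshots/2024-07-15/debian-security bookworm-security main
```

Every machine pointed at the same snapshot resolves to identical versions, as long as the referenced `/pool` files stay reachable (directly or through a caching proxy).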
I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob that pulls in the current distro (let's say `bookworm`) and its associated `-updates` and `-security` repos from an upstream rsync-capable mirror, then, after checking a killswitch and making sure things aren't currently on fire, runs `rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1`. Machines are configured to pull and update from tier1 (first 20%) / tier2 (second 20%) / tier3 (rest) appropriately on a regular basis. The files in `/pool` were served by apt-cacher-ng, but I don't know if that's still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
Thanks, that sounds like the ideal setup. This solves my problem and I need an APT mirror anyway.
I am probably going to end up with a cronjob similar to yours. Hopefully I can figure out a smart way to share the `/pool` to avoid downloading 3 copies from upstream.
Small number of machines?
Disable unattended-upgrades and use crontab to schedule this on the days of the week you want.
`0 4 * * MON apt-get update && apt-get -y upgrade && reboot`
(You can also be more subtle by calling a script that does the above, and also does things like check whether a reboot is needed first)
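A hedged sketch of such a wrapper (the apt/reboot commands are passed as parameters purely so the logic can be exercised without root; the flag file is the one Debian/Ubuntu's update-notifier-common creates):

```shell
#!/bin/sh
# Upgrade, then reboot only when the update actually requires one.
set -eu

upgrade_and_maybe_reboot() {   # upgrade_and_maybe_reboot <apt-cmd> <reboot-cmd> <flag-file>
    apt=$1
    reboot_cmd=$2
    flag=$3
    $apt update
    DEBIAN_FRONTEND=noninteractive $apt -y upgrade
    # update-notifier-common creates this flag file when a restart is required.
    if [ -f "$flag" ]; then
        $reboot_cmd
    fi
}

# Dry run with stand-ins: 'true' for apt, an echo for reboot, a temp flag file.
flag=$(mktemp)
upgrade_and_maybe_reboot true "echo reboot-needed" "$flag"

# Real cron usage would be something like:
#   upgrade_and_maybe_reboot apt-get /sbin/reboot /var/run/reboot-required
```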
Dozens, hundreds or thousands of machines? Use a scheduling automation system like Uyuni. That way you can put machines into System Groups and set a patching schedule per group. And you can also define groups of machines, either ad hoc or with System Groups, to do emergency patching, like that day's openssh critical vuln, by sending a remote command like the above to a batch at a time.
All of that is pretty normal SME/Enterprise sysadminning, so there's some good tools. I like Uyuni, but others have their preference.
However - Crowdstrike on Linux operates much like CS on Windows - they will push out updates, and you have little or no control over when or what. They aren't unique in this - pretty much every AV needs to be able to push updates to clients when new malware is detected. But! In the example of Crowdstrike breaking EL 9.4 a few months ago when it took exception to a new kernel and refused to boot, then yes, scheduled group patching would have minimised the damage. It did so for us, but we only have CS installed on a handful of Linux machines.
According to this issue, it looks like there are no plans, understandably, for making a version/fork of nsxiv but with native Wayland support.
Any recommendations for a simple image viewer in Hyprland?
I tried to make a simple list indicating which kind of profile is followable from each other type of fedi social network.
As it isn't complete, you can give suggestions. Thank you!!
The seats are assigned. People have been standing in line for 15 minutes now. Why on earth would anyone want to stand there, when they could just sit and wait until the line clears?
I understand wanting to get off a plane ASAP, but boarding? You just end up sitting on the plane, waiting for everyone else to get on.
I see only one reason why I would want to be at my seat early: if I'm not, the crew might place my backpack overhead multiple seats away, where it's hard for me to keep an eye on it and easy for thieves to take and open it. Especially on long flights there's plenty of opportunity, like when everyone is sleeping.
But for this case I use locks on the backpack anyway, so anyone who wants to open it either opens a part with nothing of value in it (thus no lock), or at least has a much harder time than they would trying the very same with other bags...
also on longer flights i usually did not have that problem, but that could also have been just luck
Yeah, that's why I am happy to avoid the US when travelling, mainly because of reports that electronics are regularly bricked by them. But this discussion is more about theft, like "never allow hand luggage to be checked, better to miss the flight instead".
it's not always about travellers that could lay hands on your stuff, maybe staff "needs" a living wage too 😉
I never understood the appeal. I watched it explode in popularity among the people around me, my family and friends. And the whole time I was like, "so it's a messaging app that automatically deletes your conversations?"
But TBF I've never understood the appeal of any social media.
I don't know anyone who uses it as a routine way to text. A couple commenters said it's popular for some people that way, though, so maybe it is. I guess it allows for more natural conversation if you're talking like you aren't making a permanent record.
If you don't use it, it doesn't seem that useful, but it actually offers a pretty good utility. There are a lot of situations where you want to show something to someone, but you don't want/need to permanently have a picture of it in your phone. Just looking at a text conversation with a friend who doesn't have it, I see pictures we've sent back and forth with screenshots of restaurant reservation times and movie show times that have already passed. There's things they've seen at the store that they wanted to know if they should buy for me. That kind of stuff doesn't need to be permanently stored on my phone, but it is. Yeah, both of us could go in and delete those pictures, but realistically, we won't.
It's all extra true for video.
I think the main appeal is that it would auto-delete the nudes you send to someone you don't quite trust. I'm too sober to contemplate why you'd send nudes to someone you don't quite trust, but I know it's a thing.
Of course, once it's on someone else's device, Snapchat can't really guarantee they haven't kept a copy. From what I've read about the implementation, it doesn't even try very hard. The fact that you can't trust the client is basic network security.
I tried it over a decade ago; the third-party app I was using on Windows Phone was pulled, so I had no desire to use it once I was on iOS and Android a few years later.
Never really saw the appeal of “stories” feels like attention seeking to me. Sending shitpost photos that you don’t care to keep to friends makes a little more sense but still not my bag
Jsyk you can disable the location stuff, but they do scan everything for “WHAT ABOUT THE CHILDRENNN” purposes which means it could just as easily be scanned for [ARBITRARY_NATIONAL_SECURITY_REASON] or whatever else they want so not sure it’s a very privacy friendly solution.
Their support also sucks ass - they bombed me with like 100+ emails the one day because their system must’ve glitched out. Never have gotten a real person before, it’s almost always a bot with a human name and the cam girl bot farms come through every now and then spamming you with friend adds
I'm 15 years old and live in Europe. Almost all of my friends (and the stranger next to me on the train) use Snapchat. Personally I find it very annoying, as my experience mainly consists of getting spammed with meaningless, completely irrelevant and utterly boring selfies by anyone I happen to add. And then, just to make things worse, people find it impolite not to answer with another selfie. And then there are the chats, which kinda work, but the UI looks cluttered and half-baked. Also, messages disappear after a while, which is utterly annoying.
At least it would be kinda easy to find people on Snapchat (there's no reason to ask around for someone's number) if it wasn't for the fact that people use the most random pseudonyms imaginable so it's a pain just to know who is who, and almost impossible to pin down new people.
Also I don't give it location sharing permission, that shit is creepy as fuck, I don't want everyone that I kind of vaguely know to know where I am all the time
From the bottom of my heart - I hate it.
For some reason it caught on as the go-to messaging app for casual conversations for folks my generation (in my country at least).
Deleted my account roughly a year ago.
Everyone I know that uses Snapchat only uses it to take pictures with filters, and save them to the gallery.
I really don't care about it, and I don't think I ever will.
Do not confuse any of the content you see on Snapchat as news. It is an advertisement. It is a free service and the content is highly competitive, so it must be enticing to pull you in and it has one objective: to generate revenue.
If you want news, you need to find a new platform with different incentives. Lemmy removes the profit incentive, a news website keeps the profit incentive but add transparency.
If you want to keep up with friends and they're on Snapchat, then by all means use Snapchat, but the idea that you can use it as a platform to keep up with news is delusional.
Disclaimer: I'm in Australia and here vitamins must comply with certain regulations. Feel free to read about it: https://www.tga.gov.au/news/blog/how-are-vitamins-regulated-australia
I bought vitamin D the other day and couldn't help but notice the price differences, such as:
Brand A: $8 x 300 pills
Brand B: $30, x 250 capsules
Brand C: $40, x 300 capsules
All had the same amount of vitamin per dose (1000 IU). They all had the AUST L label, which means they undergo controls to ensure that they contain what they claim to contain, and that they are made under certain safety standards.
I also buy iron supplements but there is nowhere near this much difference between brands. The only obvious difference was the type of pill, the more expensive ones were gel capsules while the cheap ones were hard pills.
So, are gel capsules really that much better? Is the price difference justified? Are there other issues that could explain the price difference in terms of quality?
Capsules are considered more advanced drug packaging because none of the drug dissolves in your mouth when you swallow capsules. Unlike pills, 100% of the drug goes straight to your stomach, so there's no variation in the drug dosage, and the patient won't complain if the drug is bitter.
Also because you can open the capsule and pour it into a glass of water, if you have trouble swallowing pills. Which defeats the first advantage, and you could simply order the powdered drug instead, but it won't come pre-packaged in doses, so it will be more expensive.
None of that matters for vitamins, you generally need more than 1000% of the daily dose for it to become harmful, so each pill contains more than your body really needs, because there are no side effects, so you can buy a pill and lick it, chew it, crush it, and add it to your coffee, and it will still work just fine.
and lick it, chew it, crush it,
Boil it, mash it, stick it in a stew
But it's all herbal and natural!
dried plants are no less effective in making you vomit than synthetic drugs
Rule #1 in economics is that the supplier would like to sell at as high a price as possible. People find gel capsules to be more effective, and are willing to pay that much more for them. This is what sets the price, not the actual performance.
I'm not saying that the cost is not justified. Only, that the question of the production cost is not relevant.
I'm jaded as fuck but I imagine it is all bullshit and you are being ripped off.
Same reason why a box of washing detergent powder costs a fraction of the price of washing gel pods and lasts about ten times longer and works just as well if not better.
You are being sold bullshit.
Not to sound patronizing, but I think it’s great you plan on talking to the doctor! Is there not an electronic messaging system or phone number to call before the appointment?
I apologize for being unclear- it’s multivitamins that haven’t been shown to be beneficial. Some folks report that they “feel healthier” even though their actual measurements and instances of disease aren’t any different.
Single vitamins make complete sense when you have a deficiency. I don't think capsule vs gel is the most important factor. I'd see if there are any articles where experts (doctors, pharmacists) are consulted that compare brands (i.e. not a random blog). I was able to find an article for US brands ranked by pharmacists pretty easily.
Idk about vitamins, it seems to be a bit contrived.
My ADHD stimulants come in hard pills and capsules. Capsules are long-absorption: they release the drug more slowly in the digestive system. The hard pills are a short burst, usually with lower doses.
It makes a ton of difference to me, but I just take vitamins as hard pills. Some are difficult to swallow, but I can deal. Some people can't, and capsules are likely better for them.
Here is ROME 24.07, the rolling release model's up-to-date install images. It basically carries all the features already included in the Release Candidate with the packages updated to the latest… (OpenMandriva)
Can someone ELI5 what OpenMandriva is?
How does it compare to Fedora, openSUSE, and all the Enterprise Linux forks?
How do the 'offspring' of Mandrake/Mandriva compare to one another? IIRC, there's ALT, Mageia, OpenMandriva, PCLinuxOS and ROSA.
I've also come to the understanding that what set Mandrake apart from its peers was its polish and user-friendliness, which fostered a great community back in the day. Currently, however, this role is fulfilled by distros like Linux Mint. Furthermore, most distros are relatively straightforward anyway. So, my other questions would be:
- Could the argument be made that Linux Mint is the actual spiritual successor to Mandrake?
- Are the Mandrake-offspring's most compelling raison d'être that they're Mandrake's offspring?
I think Ubuntu was and probably continues to be the real “spiritual successor” just because it is still widely used and is still very polished and user friendly as long as you want to keep with their experience. However, to really compare the “ease of use” (hand holding?) vs contemporaries of Mandrake, Elementary or Zorin might fit the role. They are simplified compared to even Ubuntu, Mint, or Pop OS.
All of the distros have gotten so much easier than they were at the time though. When X got autoconfiguration rather than a distro installer trying to guess and generate a config it was a huge game changer from the way it was before (the days when Debian warned about destroying your monitor). In some ways I think this was one of the largest ease of use changes we’ve had. The other stuff just got better.
I think they are mostly compelling to people who like the nostalgia and were fans of Mandrake. Mandrake was hugely popular in its time. I somewhat doubt they are getting a ton of new converts, not that there’s anything wrong with that.
Very pessimistic. Besides the current problems like wars and Trump becoming the next president of the USA (which, as a European citizen, really scares me), climate change is going to fuck over human society big time in my lifetime. Well, it already is, but still, humanity as a whole is doing jack shit about it. Giant oil companies keep drilling for new oil and gas, the best-selling cars are unnecessarily huge SUVs, planes are still being subsidized rather than trains, humanity keeps eating meat, and plastic usage and production is barely going down.
The current problems the news is full about don't really matter in the long run when we're literally making our planet unliveable and humanity is clearly still denying it.
There was four years of Trump and nothing particularly bad happened.
In fact, global instability has been markedly worse in the four years since.
which president's administration directly attacked Europe with the Nord Stream II bombing?
There was four years of Trump and nothing particularly bad happened.
He didn't really have a plan for his first term. That's why he only was able to do a few bad things. This time around there is a plan.
I wouldn't classify Project 2025 as "more standard American bullshit".
It's basically a guide on how to turn the USA into a fully fascist country within one presidential term.
There was four years of Trump and nothing particularly bad happened.
Except that time a million and a half people died and literally the whole country had to stay inside collecting unemployment and washing our groceries while all of his followers got super amped up and violent because they weren’t (always) being allowed to make things worse
And that little bonus surprise at the end and how a sizable portion of the country including some important judges hates elections and anyone who makes them happen now
I mean there’s more but those are good starters
actually, it would seem that the nordstream attack was a bipartisan effort, with the plan already having gotten support under Trump:
interviewer: what's your relation like with vladimir putin?
donald trump: i think it's very good, but i was tough with him. i ended the pipeline. it was called nordstream 2.
According to the GIEC (IPCC) report, if nothing changes (and nothing is changing, as you rightly said), the fall of our society will start in 2040 because of food shortages due to the climate.
2000 fucking 40! It's tomorrow. I am destroyed by this future and really don't understand corporations and/or politicians.
I still have friends making babies and not thinking that their lives will be miserable in less than 20 years.
I think that’s the one yes. Decreased food yield from 2040 to 2099 onwards.
Also, even if I'm not into that, an old fart like Nostradamus or someone like that (I don't remember his name) wrote at the time that humanity would be greatly reduced around the first part of the 21st century. And now scientific studies more or less agree with that.
I have hope that humanity will change; I have zero hope that the ones who can actually do things (industries, huge corporations, rich people, politicians) will do something.
Apparently it's better to die sitting on unused billions than to leave a living world for your kids.
I feel you. I want children but agreed with myself long ago that they will be adopted because I don't want to bring children onto this dying planet.
My country (the Netherlands) is going to be majorly flooded within the next 100 years (but probably sooner), yet the majority of buildings built to address the housing crisis are still being built below sea level in the major cities.
People think they're not climate change deniers but 95% of them most definitely are.
Both. I do believe that "communism will win" as an inevitability (with one big caveat, see below). Capitalism obviously is unsustainable and rife with internal contradictions that can only lead to its eventual demise. The obvious and broad example being that it requires infinite growth on a finite planet. But I think it can get very bad before it gets better, and expect it will further devolve into fascism (much more so than it already has) for most if not all of the western world, and the entire world will suffer as a result. Socialism, then communism will eventually emerge (since fascism is just as doomed by its contradictions as capitalism is), but before we get there, I expect there is going to be some truly unimaginably dark and horrible times on the way there. So in that sense, I am ultimately optimistic about the future of the world, but extremely pessimistic about its more immediate future.
But now for the caveat. I think that most people, even leftists, don't fully appreciate how much climate change is going to reshape the world. There is a real chance that it will get bad enough that civilization may not survive, that humanity as a species will be among the many that don't make it through the mass extinction we've only just entered. Even people fully on board with knowing climate change is bad and must be curtailed as much as possible as soon as possible still mostly don't realize how much a genuine existential threat it is on a planetary scale, on a scale of centuries and longer. It is by no means a certainty, but given the feedback loops we don't fully understand and definitely don't know how to interrupt, there is a possibility of Earth even going the way of Venus. Obviously I hope that's not the case, but it would be a mistake not to recognize the extreme potential of climate change. If we are able to mitigate it in time, I am like I said, ultimately optimistic. But I am beyond afraid that we won't be able to mitigate it in time.
In other words, it's not just "socialism or barbarism," it's socialism or annihilation.
Unscientific take on climate change, IMO
What I've read from scientists/experts doesn't paint that picture at all.
Catastrophic weather events will kill millions, but not a billion.
you: "That's unscientific"
get shown that it is in fact scientific
you again: "I disagree."
You don't seem to understand how science or reality works.
Realistic. The world will be fine, not so much for most of the lifeforms that currently populate it. Earth has gone through evolutionary resets before. The existence of deep sea hydrothermal vents with methane loving organisms means that Earth will most likely NOT become like Venus, even when all feedback loops run their course. The carbon released and washed into the oceans will feed the species that produce oxygen, just as it has before.
Humans have become their own asteroid.
Well... Earth becoming more like Venus is an inevitability in the high hundreds of millions of years (and for scale, multicellular life has been around for roughly 500-600M years, with the more than 3 billion before that just being simple single-celled prokaryotic life), but that is completely independent of anthropogenic climate change; it is because of the expansion of the sun towards its red giant phase. In terms of being habitable to life, Earth is easily past the half-way mark already, no matter what. However, that is far enough out that it doesn't bear worrying about and isn't something we can have any sort of impact on.
That said, the climate change that we as a species are causing right now could lead to a runaway greenhouse effect on much shorter time scales. The fact is, there have been times in Earth's history where it has been so hot that complex life could mostly only survive at the poles (with the equator being a death zone to all but simple, single-cellular extremophiles) and there have been times where Earth was encased almost entirely in ice except perhaps at the equators - not just our usual conception of an ice age, but "snowball earth," and this was likely caused by certain forms of simple life, fascinatingly enough. The feedback loops we are triggering right now have a potential to drastically change the composition of the atmosphere on a far shorter timescale, one in which we are talking about an end to most complex life (obviously ourselves included). It was almost certainly volcanism that caused Venus to go from a mostly habitable planet to the completely, utterly inhospitable world it is. Volcanism has also been responsible for extreme heat and mass extinctions on Earth, but obviously it never tipped over into Venus-like territory. The thing is, right now we're changing the atmosphere at a rate far faster than volcanism has in the past! And rate of change matters a lot with this kind of thing. I'm repeating myself, but again, it is not a certainty but it is a possibility that anthropogenic climate change could hit tipping points that Venus-ifies Earth on a much shorter, nearer term than anything relating to the expansion of the sun, on time scales that are worth worrying about (if we value humanity as a whole), and is the sort of thing we can have an impact on.
Pretty pessimistic, short term (the next decade or two.)
I am optimistic about socialism and anarchism very long term (the next 100-1000 years,) maybe just because I try to remain hopeful on stuff like this.
Humanity has created some incredible things, we have so much potential to be a boon for the planet, for the animals, and for each other. But we must oust Capitalism, corporatism, statism, and fascism to even have a chance.
I think that the first 20 to 30 years will be very difficult for humanity. There is a distinct reactionary movement that is blocking or even reversing progress needed to fix various problems (including, but not limited to, climate change, destruction of ecosystems, housing problems, and the world population aging beyond sustainability). It will get very messy.
After the boomer generation has died out, as well as my own (GenX-er here), humanity can hopefully look forward again. As I age, I really think that it is our two generations that are blocking progress. As millennials and Gen Z age, they will hopefully learn from us how not to do things.
Optimistic. Even if we all die, consciousness can evolve again. Even if we end up tortured for all eternity by robots, after a few million years we’ll be godlike and able to transcend that reality. Those who would be able to create it recognize this fact, and so there’s a mechanism by which the most intelligent are most aware of the inescapability of karma.
Given the perfection of the information ecosystem in its most abstract form, I predict increased pleasure and joy and decreased betrayal and suffering as time goes on.
Might be right in the near term, but everything will be okay.
I assume you mean in the long term, like looking centuries or more out. If the scope is different my answer might be different.
It's a total unknown to me. The biggest question is where the Enlightenment came from after millennia of producing the same system over and over again, and if it's here to stay. MAD and GAI are the other two big existential threats. Other problems can all be recovered from in the long arc of history. (Yes, even climate change)
Sorry to be a bother, but I'm hitting a wall here, and my google-fu is apparently not strong enough. I'm trying to reinstall a Home Assistant VM on my server (it's been a while, and I have no idea how I did it originally).
Running:

```shell
virt-install --name haos --description "Home Assistant OS" \
  --os-variant=generic --ram=16384 --vcpus=4 \
  --disk /home/chris/haos_ova-12.4.qcow2,bus=scsi \
  --controller type=scsi,model=virtio-scsi \
  --import --graphics none --boot uefi
```
Returns:

```text
WARNING  KVM acceleration not available, using 'qemu'
WARNING  Using --osinfo generic, VM performance may suffer. Specify an accurate OS for optimal results.
Starting install...
ERROR    internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/home/chris/.cache/libvirt/qemu/log/haos-swtpm.log' for details.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///session start haos
otherwise, please restart your installation.
```
Checking the 'haos-swtpm.log' shows repeated entries of:
Starting vTPM manufacturing as chris:chris @ Sun 21 Jul 2024 02:58:07 AM UTC
Successfully created RSA 2048 EK with handle 0x81010001.
Invoking /usr/lib/x86_64-linux-gnu/swtpm/swtpm-localca --type ek --ek af44f41c741b89d0a45748c4bb34d21457da950586715133274c649c7a84dd7dffcbd1b53f2f56f7b24a00529e92db82e30b60a759672531a3c5faea54a71fb8df433f9034bfad37d7561fd187c9562024322d6a7ab41e1af26b0cbe67a66869b9f779eef408f27e14f97d365be47921612e8d9ca010dfdd9ab08c3a321b795b3b2809f1bd132b57eb6408569c38f7558eda65e1787c4d4b077794b249c87fa5f275cf8bc8bbce41467448b4ee9648da06a84a0c03378416f1a5dec7c5317e5f0883ca515e207fce70495f144148d18ac34def0e2415d3e82fcfe9224848b7ccfe35143207b0f1fce4293cd9cd1c11daa3d45463b0c17ad7d988438c52aa631f --dir /home/chris/.config/libvirt/qemu/swtpm/d7406119-26ab-4e42-b98e-46065e1ea2eb/tpm2 --logfile /home/chris/.cache/libvirt/qemu/log/haos-swtpm.log --vmid haos:d7406119-26ab-4e42-b98e-46065e1ea2eb --tpm-spec-family 2.0 --tpm-spec-level 0 --tpm-spec-revision 164 --tpm-manufacturer id:00001014 --tpm-model swtpm --tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf --optsfile /etc/swtpm-localca.options
Need read/write rights on statedir /var/lib/swtpm-localca for user chris.
swtpm-localca exit with status 1: An error occurred.
Authoring the TPM state failed.
Ending vTPM manufacturing @ Sun 21 Jul 2024 02:58:07 AM UTC
I gather that it seems to be an issue with the vTPM, but I usually deal in containers, so this is all new to me.
Thanks in advance.
What motherboard are you running?
Also, are you sure your user has the right permissions to access libvirt assets? Do you get the same error if you run as sudo?
WARNING KVM acceleration not available, using 'qemu'
That's related to hardware virtualisation, like the other person said, check that it's enabled.
WARNING Using --osinfo generic, VM performance may suffer. Specify an accurate OS for optimal results.
This is related to --os-variant=generic. I don't remember what Home Assistant OS is based off, but find out and pick an option from virt-install --os-variant list; otherwise use linux2022.
ERROR internal error: Could not run '/usr/bin/swtpm_setup'.
I'm not sure why it's attaching a TPM, but I believe --tpm clearxml=true should remove it.
What are you reinstalling? New haos on old kvm? Old haos on new kvm? New deployment?
From the logs I read that your user chris has no read/write permissions on the swtpm state directory:
Need read/write rights on statedir /var/lib/swtpm-localca for user chris.
This indicates a permission issue. Sounds like your user account isn't part of whatever group swtpm needs it to be. Check the permissions on that directory and maybe add your user account to that group.
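A quick way to confirm the diagnosis is a sketch like this (the directory path is the one from the error log; the `swtpm` group name and the remedies in the comments are assumptions, so check how your distro actually packages it):

```shell
#!/bin/sh
# Report whether the invoking user can read and write the swtpm-localca
# state directory named in the error log.

check_statedir() {
    dir=$1
    user=$2
    if [ -r "$dir" ] && [ -w "$dir" ]; then
        echo "ok: $user has rw access to $dir"
    else
        echo "fix needed: grant $user rw access to $dir"
    fi
}

check_statedir /var/lib/swtpm-localca "$(id -un)"

# Typical remedies (run as root; log out and back in so group changes apply):
#   chown root:swtpm /var/lib/swtpm-localca && chmod 0770 /var/lib/swtpm-localca
#   usermod -aG swtpm chris
```

If the script reports "fix needed", one of the commented remedies (or simply rerunning virt-install as root to rule permissions out) should confirm it.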
The other error messages are warning you that you don't have kvm support, which will make your VM terribly slow. Could also be a permission issue, could be a UEFI setting. It's probably worth fixing more than the TPM issue; you can probably just remove the TPM from the hardware config to get it to boot because I doubt you'll strictly need a TPM to use HAOS.
I don't think that we're in a simulation, but I do find myself occasionally entertaining the idea of it.
I think it would be kinda funny. I have seen so much ridiculous shit in my life that the idea that all those ridiculous things were simulated inside a computer, or that maybe an external player did those things that I witnessed, is just too weird and funny at the same time lol.
Also, I play Civilization VI and I occasionally wonder, 'What if those settlers / soldiers / units / whatever are actually conscious? What if those lines of code actually think that they're alive?' In that case, they are in a simulation. The same could apply to other life simulators, such as The Sims 4.
Idk, what does Lemmy think about it?
It is incredibly unlikely.
I know: "if an ancestor simulation is possible, then it is much more likely you're in one than not in one." That's fallacious, unfalsifiable, and everyone loves to leave out the word "ancestor", which is very important to the thought experiment.
In our universe, no system is entirely isolated from the rest of it. It is impossible to create a system that does not in some way interact with the outside universe. So if it is a simulation in a universe, and the universe it is running in also has this rule we would see information from that universe leak into ours in some way. How that would appear we don't know, but it would be possible to figure it out. Maybe heat dissipates out, maybe bit flips happen in our universe due to the parent's equivalent to cosmic rays, maybe the speed of light is a result of the clock speed of the simulator. We don't know what it would be, but there would be something, and it would be theoretically discernible.
It follows that at least some of the laws of our universe would be laws of the parent universe. So maybe that rule, that no system exists in isolation, is also true above. Or maybe our speed of light is the same for them. Whatever it is, our cumulative constraints are more than those of the simulation.
All that, unless, in the parent universe, 1) systems can exist in isolation, or 2) it is an environment with no constraints. These two are functionally equivalent, so I'll talk about them like they're the same thing. In such a universe, there would be no causality, no form, nothing that makes it unified. It's not a universe at all. It's something like a universe post heat death. In such a scenario, running a simulation isn't possible. If it were, to create an environment in which causality can be simulated, that environment wouldn't be a simulation, it would be a bona fide universe.
So I think, the fact that we see no evidence that we are in a simulation means we are probably not in one. So that means, if we are in one it is falsifiable and we can prove or disprove it empirically. And it also means we can escape, or at the very least destroy it.
Sure, but I don't think that's what's going on there.
I think observation/measurement of a quantum system means entangling with the system, so the quantum system becomes larger and includes the observer. Combine that with relativity, which holds absolutely in the universe, and you have an explanation for that phenomenon.
That wouldn't explain why the two results end up not agreeing sometimes.
I agree that it relates to how the observer entangles with the system, but you see this kind of error class occurring in net code all the time.
Player 1 shoots an enemy around the same time as player 2. Player 1 has a locally rendered resolution to the outcome of having killed the enemy and gets awarded the xp, and player 2 has the same result.
The server has to decide if it is going to let both local clients be correct or resolve in a way that reverses the outcome for one of the clients. For things that don't really matter, it lets both be correct.
Here, each individual outcome is basically Bell's paradox, where we know there needs to be consistent results no matter how each observer behaves. But in this case, when a second layer of abstraction is added, the results are capable of disagreeing.
It looks very similar to a sync error, and relativity doesn't in any way explain it.
Relativity only relates to the relative shape of spacetime and movement through it.
So for example, things occurring faster for one inertial frame vs another, or something being closer to an observer moving quickly than for one stationary.
It's exclusive to the combination of spacetime curvature and one's momentum within it.
How do you think relativity does explain it?
You should look into contextual realism. You might find it interesting. It is a philosophical school from the philosopher Jocelyn Benoist that basically argues that the best way to solve most of the major philosophical problems and paradoxes (i.e. mind-body problem) is to presume the natural world is context variant all the way down, i.e. there simply is no reality independent of specifying some sort of context under which it is described (kind of like a reference frame).
The physicist Francois-Igor Pris points out that if you apply this thinking to quantum mechanics, then the confusion around interpreting it entirely disappears, because the wave function clearly just becomes a way of accounting for the context under which an observer is observing a system, and that value definiteness is just a context variant property, i.e. two people occupying two different contexts will not always describe the system as having the same definite values, but may describe some as indefinite which the other person describes as definite.
"Observation" is just an interaction, and by interacting with a system you are by definition changing your context, and thus you have to change your accounting for your context (i.e. the wave function) in order to make future predictions. Updating the wave function then just becomes like taring a scale and isn't "collapsing" anything physical. There is no observer-dependence in the sense that observers are somehow fundamental to nature, only that systems depend upon context and so naturally as an observer describing a system you have to take this into account.
Quantum mechanics and relativity are, at least currently, incompatible theories. Relativity depends on continuous things, which is why it has singularities and what not. But quantum mechanics has minimum discrete units that don't play nice with gravity and relativity.
Also, it's still an open debate as to whether quantum mechanics is applicable to all sizes of things. There are consequences to that being the case, and it's one of the suggested assumptions for resolving recent paradoxes around incompatibilities between the theory and our expectations for its behavior. If it does apply to larger objects, the consequences are basically that either there's no free will and superdeterminism is true, or else that quanta don't actually exist until observed.
In fact, currently we haven't been able to observe quantum behavior in anything large enough to measure gravitational effects from. Which may be where a fundamental limit exists, given the incompatibility between relativity and QM.
So, as we all know from the news, the cybersecurity firm CrowdStrike Y2K'd its own end customers with a shoddy, untested update.
But how does this happen? Aren't there programming teams that check their code, or pass it to quality assurance staff, to see if it bricks their own machines?
8.5 million machines, too; does that affect home users, or is it only for Windows machines that have this endpoint agent installed?
Lastly, why would large firms and government institutions such as railway networks and hospitals put all their eggs in one basket? Surely chucking everything into "The Cloud (Literally just another man's tinbox)" would be disastrous?
TLDR - Confused how this titanic tits-up could happen and how 8.5 million Windows machines (POS, desktops and servers) just packed up.
Those are the risks of DevOps continuous integration/continuous deployment (CI/CD). Why break things one at a time when you can break them in the millions at once?
I fully expect CS to increase their QA for the next year or two, then slowly dwindle it back to pre-fuckup levels once their share price recovers.
8.5 million machines, too; does that affect home users, or is it only for Windows machines that have this endpoint agent installed?
This software is mandated by cyber insurance companies to 'keep your business secure', aka 'your staff broke policy, so we don't have to pay out this claim'.
No home user should ever run something like this at all. This is entirely a corporate thing.
But how does this happen? Aren't there programming teams that check their code, or pass it to quality assurance staff, to see if it bricks their own machines?
I mean - we're all just people. Fuck ups happen because people check other people's work - even with excellent systems in place shit will slip through... we just try to minimize how often that happens.
Lastly, why would large firms and government institutions such as railway networks and hospitals put all their eggs in one basket? Surely chucking everything into "The Cloud (Literally just another man's tinbox)" would be disastrous?
Because they are best in class. No one else does EDR like Crowdstrike does. Can you imagine the IT support headaches if you had 200,000 PCs and servers some running one EDR and others running a different one. The amount of edge cases you would come across is ridiculous.
It would make data correlation a nightmare if an actual security incident occurred.
Well, obviously that's about to change. The core product is still fantastic, but their (presumably) greed and process handling around how they deliver changes has failed here.
The product is still good, hopefully they can mature
But how does this happen?
It's destined to happen, according to Normal Accident Theory.
Aren't there programming teams that check their code, or pass it to quality assurance staff, to see if it bricks their own machines?
Yes, there are probably a gigantic number of tests, reviews, validation processes, checkpoints, sign-offs, approvals, and release processes. The dizzying number of technical components and byzantine web of organizational processes was probably a major factor in how this came to pass.
Their solution will surely be to add more stage-gates, roles, teams, and processes.
As Tim Harford puts it at the end of this episode about "normal accidents": "I'm not sure Galileo would agree."
Galileo tried to teach us that when we add more and more layers to a system intended to avert disaster, those layers of complexity may eventually be what causes the catastrophe.
This is actually an excellent question.
And for all the discussions on the topic in the last 24h, the answer is: until a postmortem is published, we don't actually know.
There are a lot of possible explanations for the observed events. Of course, one simple and very easy to believe explanation would be that the software quality processes and reliability engineering at CrowdStrike are simply below industry standards -- if we're going to be speculating for entertainment purposes, you can in fact imagine them to be as comically bad as you please, no one can stop you.
But as a general rule of thumb, I'd be leery of simple and easy to believe explanations. Of all the (non-CrowdStrike!) headline-making Internet infrastructure outages I've been personally privy to, and that were speculated about on such places as Reddit or Lemmy, not one of the commenter speculations came close to the actual, and often fantastically complex chain of events involved in the outage. (Which, for mysterious reasons, did not seem to keep the commenters from speaking with unwavering confidence.)
Regarding testing: testing buys you a certain necessary degree of confidence in the robustness of the software. But this degree of confidence will never be 100%, because in all sufficiently complex systems there will be unknown unknowns. Even if your test coverage is 100% -- every single instruction of the code is exercised by at least one test -- you can't be certain that every test accurately models the production environments that the software will be encountering. Furthermore, even exercising every single instruction is not sufficient protection on its own: the code might for instance fail in rare circumstances not covered by the test's inputs.
For these reasons, one common best practice is to assume that the software will sooner or later ship with an undetected fault, and to therefore only deploy updates -- both of software and of configuration data -- in a staggered manner. The process looks something like this: a small subset of endpoints are selected for the update, the update is left to run in these endpoints for a certain amount of time, and the selected endpoints' metrics are then assessed for unexpected behavior. Then you repeat this process for a larger subset of endpoints, and so on until the update has been deployed globally. The early subsets are sometimes called "canary", as in the expression "canary in a coal mine".
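That staggered process can be sketched as a simple loop; everything here (fleet size, wave sizes, the health check) is illustrative and not any vendor's actual pipeline:

```shell
#!/bin/sh
# Toy staggered rollout: update growing slices of the fleet, assessing
# metrics between waves. All numbers below are made up for illustration.

fleet_size=100
waves="1 10 50 100"    # cumulative endpoints updated per wave; "1" is the canary

deployed=0
for target in $waves; do
    echo "wave: updating endpoints $((deployed + 1))..$target"
    deployed=$target
    # Bake time would pass here before the wave's metrics are assessed.
    healthy=true       # stand-in for real crash-rate / boot-loop checks
    if [ "$healthy" != true ]; then
        echo "bad metrics, halting rollout at $deployed endpoints"
        break
    fi
done
echo "deployed to $deployed of $fleet_size endpoints"
```

The point of the early waves is that a fault like Friday's would halt the loop after the canary slice instead of reaching the whole fleet.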
Why such a staggered deployment did not appear to occur in the CrowdStrike outage is the unanswered question I'm most curious about. But, to give you an idea of the sort of stuff that may happen in general, here is a selection of plausible scenarios, some of which have been known to occur in the wild in some shape or form:
- The update is considered low-risk (for instance, it's a minor configuration change without any code change) and there's an imperious reason to expedite the deployment, for instance if it addresses a zero-day vulnerability under active exploitation by adversaries.
- The update activates a feature that an important customer wants now, the customer phoned a VP to express such, and the VP then asks the engineers, arbitrarily loudly, to expedite the deployment.
- The staggered deployment did in fact occur, but the issue takes the form of what is colloquially called a time bomb, where it is only triggered later on by a change in the state of production environments, such as, typically, the passage of time. Time bomb issues are the nightmare of reliability engineers, and difficult to defend against. They are also, thankfully, fairly rare.
- A chain of events resulting in a misconfiguration where all the endpoints, instead of only those selected as canaries, pull the update.
- Reliability engineering not being up to industry standards.
Of course, not all of the above fit the currently known (or, really, believed-known) details of the CrowdStrike outage. It is, in fact, unlikely that the chain of events that resulted in the CrowdStrike outage will be found in a random comment on Reddit or Lemmy. But hopefully this sheds a small amount of light on your excellent question.
I want to clarify something that you hinted at in your post but I've seen in other posts too. This isn't a cloud failure or remotely related to it, but a facet of a company's security software suite causing crippling issues.
I apologize ahead of time, when I started typing this I didn't think it would be this long. This is pretty important to me and I feel like this can help clarify a lot of misinformation about how IT and software works in an enterprise.
Crowdstrike is an EDR, or Endpoint Detection and Response software. Basically a fancy antivirus that isn't file signature based but action monitoring based. Like all AVs, it receives regular definition updates around once an hour to anticipate possible threat actors using zero-day exploits. This is the part that failed, the hourly update channel pushed a bad update. Some computers escaped unscathed because they checked in either right before the bad update was pushed or right after it was pulled.
Another facet of AVs is how they work depends on monitoring every part of a computer. This requires specific drivers to integrate into the core OS, which were updated to accompany the definition update. Anything that integrates that closely can cause issues if it isn't made right.
Before this incident, Crowdstrike was regarded as the best in its class of EDR software. This isn't something companies would swap to willy nilly just because they feel like it. The scale of implementing a new security software for all systems in an org is a huge undertaking, one that I've been a part of several times. It sucks to not only rip out the old software but also integrate the new software and make sure it doesn't mess up other parts of the server. Basically companies wouldn't use CS unless they are too lazy to change away, or they think it's really that good.
EDR software plays a huge role in securing a company's systems. Companies need this tech for security but also because they risk failing critical audits or can't qualify for cybersecurity insurance. Any similar software could have issues - Cylance, Palo Alto Cortex XDR, Trend Micro are all very strong players in the field too and are just as prone to having issues.
And it's not just the EDR software that could cause issues, but lots of other tech. Anything that does regular definition or software updating can't, or shouldn't, be gated by the enterprise, because the frequency and urgency of each update would make filtering impractical. Firewalls come to mind, but there could be a lot of systems at risk of failing due to a bad update. Of course, it should fall on the enterprise to provide the manpower to do this, but this is highly unlikely when most IT teams are already skeleton crews and subject to heavy budget cuts.
So with all that, you might ask "how is this mitigated?" It's a very good question. The most obvious solution "don't use one software on all systems" is more complicated and expensive than you think. Imagine bug testing your software for two separate web servers - one uses Crowdstrike, Tenable, Apache, Python, and Node.js, and the other uses TrendMicro, Qualys, nginx, PHP, and Rust. The amount of time wasted on replicating behavior would be astronomical, not to mention unlikely to have feature parity. At what point do you define the line of "having redundant tech stacks" to be too burdensome? That's the risk a lot of companies take on when choosing a vendor.
On a more relatable scale, imagine you work at a company and desktop email clients are the most important part of your job. One half of the team uses Microsoft Office and the other half uses Mozilla Thunderbird. Neither software has feature parity with the other, and one will naturally be superior over the other. But because the org is afraid of everyone getting locked out of emails, you happen to be using "the bad" software. Not a very good experience for your team, even if it is overall more reliable.
A better solution is improved BCDR (business continuity disaster recovery) processes, most notably backup and restore testing. For my personal role in this incident, I only have a handful of servers affected by this crisis for which I am very grateful. I was able to recover 6 out of 7 affected servers, but the last is proving to be a little trickier. The best solution would be to restore this server to a former state and continue on, but in my haste to set up the env, I neglected to configure snapshotting and other backup processes. It won't be the end of the world to recreate this server, but this could be even worse if this server had any critical software on it. I do plan on using this event to review all systems I have a hand in to assess redundancy in each facet - cloud, region, network, instance, and software level.
Laptops are trickier to fix because of how distributed they are by nature. However, they can still be improved by having regular backups taken of a user's files and testing that Bitlocker is properly configured and curated.
All that said, I'm far from an expert on this, just an IT admin trying to do what I can with company resources. Here's hoping Crowdstrike and other companies greatly improve their QA testing, and IT departments finally get the tooling approved to improve their backup and recovery strategies.
Fantastic write up. I'd just add something to this bit:
Basically companies wouldn’t use CS unless they are too lazy to change away, or they think it’s really that good.
I work in Cyber Security for a large organization (30,000+ end points). We're considering moving to CrowdStrike. Even after this cock-up, we're still considering moving to CS. I've had direct experience with several different A/V and EDR products, and every single one of them has had a bad update cause systems to BSOD. The reason this one hit so hard is that CS is one of the major EDR/XDR vendors. But ya, it's generally considered that good. Maybe some folks will move away after this. And maybe another product is nipping at their heels and will overtake them in the near future. But, for now, it's not surprising that it was everywhere for this situation to get really FUBAR.
Could a solution to this be any of the following:
Basically the second one is standard practice, a phased rollout. The only reason you wouldn't do one is if there's some really bad exploit that is currently being exploited and you need to fix it now now now. So either somebody fucked up and deployed a regular fucked update as a critical patch, or a critical patch was shoddily made and ended up soft bricking everyone.
But idk, I don't work in tech.
A large percentage of threads I've created or participated in have been deleted. Worse is that when visiting the URL everything is completely gone.
This is much more drastic when compared with Reddit thread deletions, where the thread is there and so is the discussion. And the creator of the thread has access to their content.
The Lemmy method discourages people from participating in threads and creating high-quality content, much more so than the Reddit method.
A bunch of lively and useful discussions on Lemmy have completely disappeared. And it makes it seem like a waste of time to even contribute content here.
EDIT: I see that the "fediverse" link for posts has been removed. I posted this to lemmy.ml from a lemmy.world account and there's no way for me to get the lemmy.ml link now. And when I crosspost it it shows a lemmy.world link instead of the lemmy.ml one. I think this should be changed [back].
EDIT: I see that the “fediverse” link for posts has been removed.
It's still there, just not when the post/comment comes from the instance you're on. Even though the post is to a lemmy.ml community, it's from lemmy.world so that's where the fediverse link goes to.
Useful info from an admin about why one of the threads went missing: https://lemmy.world/comment/11288468
There seems to be room for improvement in how Lemmy handles this.
Hello all,
A couple of questions about Cantata.
It hasn't been developed for a while now. Will it eventually just stop working, or is there any way to at least port it to Qt 6, without new features, just keeping the basic functionality?
I have the habit of starring most of the songs I listen to, so that it's easier to, for example, create smart playlists. If I can no longer use Cantata, is it possible to somehow export this data? Let's say that it stops working and I switch to something else (say, Navidrome or some equivalent). Will I be able to export my preferences, stars, playlists, favorites, etc.?
Cantata is an excellent graphical frontend to mpd.
And another fellow Cantata fan posted a fork of the original project that apparently is being maintained.
Qt5 Graphical MPD Client. Contribute to CDrummond/cantata development by creating an account on GitHub.GitHub
Big fan of Cantata here 😀
It was forked this year and the new developer kept using the original name.
Qt Graphical MPD Client. Contribute to nullobsi/cantata development by creating an account on GitHub.GitHub
Hey, that's great news!
And to think I've been searching so much about Cantata these last few days and didn't find this GitHub project... although it seems to be a fork of a fork, doesn't it?
I do appreciate your input.
How much money is needed, and what's the procedure (not the medical side, but how to approach the professionals)? Is Switzerland the only place that accepts foreigners, and have they had any successful cases where an inherited neurobiological disorder was the reason?
Thanks for your help and I would like to know more about euthanasia too. Have a nice day.
So with Exit, you need to have been diagnosed by a psychiatrist. That probably takes some time, and that time obviously has to be paid for. After that, a second psychiatrist checks the diagnosis for errors. And the third actor is a regular doctor, double-checking for errors. If all goes well, you get the death cocktail.
As things can go wrong with that, you should be accompanied by someone experienced. Exit provides this assistance free of charge, as they use donations and membership fees to pay for that.
I cannot tell you about what diseases are successful. Usually, psychological issues are not enough to get the diagnosis you need. You need to be heavily impacted by it and there has to be no cure.
Sounds painful with a high possibility of breaking bones then drowning while conscious. I'd recommend an inert gas that's not CO2 and readily available, like nitrogen or something (CO2 buildup in the blood is what gives the sensation of suffocation). If you're worried about people finding you and a mess, get an enema and stay a bit dehydrated first, and also ensure your body's found within the first 2-3 days if possible (the first thing your corpse does is shit itself, and rot sets in pretty quickly).
This of course presumes you're making the decision to end yourself while of sound mind, not in some panic, feeling trapped or completely hopeless. There's usually a way out that's not as permanent and can lead to future positive interactions that make continued living worth the pain. That said, I'll never judge someone whose pain outweighs their will to live.
A lot of people seeking euthanasia are in a very weak physical condition, in which getting somewhere by themselves and jumping from said bridge would be a feat. This includes people who are bedridden or who have temporary memory blackouts; there are people who would, if only they were allowed to leave the hospital, or if their families would leave them alone to do so.
People seeking this usually aren't healthy, independent and self reliant. Those already jump the bridge.
Correct. Essentially, you are not living anymore but are forced to by law and the desires of those around you, who are more focused on their wants than yours. It's inhumane to ignore a plea for mercy, and yet that's what people do.
Overdosing is one of the easier options, but assisted suicide always requires assistance.
It is not the state. Those are regular professionals.
I partially agree that there should be a way to kill yourself easily. But don't forget that very often this death wish is caused by a mental illness, which can be treated.
Washington state and New Jersey specifically allow certain types of euthanasia, but I'm not sure how illegal it is -- or any 'suicide' is -- in different places. Is euthanasia a crime in your state? Is (attempted) suicide?
Murdering someone **else** is a crime, so it is nice to have laws specifying how a person can legally help someone without being charged with murder.
The U.S. has historically not 'punished' suicide as much more than a misdemeanor, if at all. From PDF paper from 1962:
As stated by a leading authority on criminal law:^30^
"When a man is in the act of taking his own life there seems to be little advantage in having the law say to him: 'You will be punished if you fail.' ... What is done to him will not tend to deter others because those bent on self-destruction do not expect to be unsuccessful. It is doubtful whether anything is gained by treating such conduct as a crime."
England, on the other hand, was very hostile to suicide until it was decriminalized in 1961 (paper is too old to mention current status):
A person who committed suicide was punished at common law by burial in the public highway with a stake driven through his body and by forfeiture of his goods and chattels to the king.' Attempted suicide was apparently punished like any other misdemeanor.
End of Life Washington - formerly Compassion & Choices of Washington - guides people in planning for the final days of their lives.
those bent on self-destruction do not expect to be unsuccessful
Expectations of success are actually part of what get people there in the first place
forfeiture of his goods and chattels to the king.
Oh, well there's a huge surprise... 🙄
I believe Australia recently approved a law which allows for assisted dying; however, I'm not 100% sure if that's the case, and if so, whether foreigners are allowed or in which states it works.
My recommendation would be to search online for an End Of Life Companion or a Death Doula. They will be able to provide you with the information you seek, free, and if you wish, they can also assist you in person for a fee.
Good luck OP
I have no experience with this, but happened to have seen an interview with Ludwig Minelli, the founder of Dignitas (an organisation for assisted death). The man is 90+ and still fighting for this right. I believe I saw it in a video format, but I think this was the interview - I think it's worth a read.
I'd suggest you look up the contact for the various organisations and reach out with your situation and questions to see what they say. They're likely to be much better sources of information.
I want to migrate my Nextcloud instance from a VPS to a server in my home. I run the Nextcloud AIO Docker container, which uses Borg backup. The backup repo is about ~70 GB.
How would I best go about transferring it? Is using scp a good solution here (in combination with nohup so that I don't have to keep my ssh session active)? Or is there some other best practice way of doing this?
I'm running Proxmox on a Lenovo ThinkCentre and I decided to swap the internal 256 GB 2.5" SSD for a 500 GB NVMe.
I installed the new NVMe alongside the old SSD and formatted it as ext4 with a single partition. I then ran 'dd if=/dev/sda of=/dev/nvme0n1' and it went through without an error.
My impression was that it would clone all content from the old drive to the new, but the new drive wouldn't boot. I then logged in and set a boot flag via fstab, which got it booting, but now the system gets stuck at "waiting for root file system".
Nothing is lost as the old drive still works fine when installed but how do I complete the swap correctly so I can go NVME-only?
Thank you!
is your fstab using uuid's?
did you recreate your ramdisk and bootloader?
No UUIDs, only paths. When booting from a live system I noticed /etc/fstab is empty. Also, there were a bunch of partitions on /dev/sda and I can't see any on /dev/nvme0n1.
No not consciously
it sounds like the bootloader is installed but not updated to point to the ramdisk, and i'd be surprised if your ramdisk doesn't need a new driver to load the nvme drive, since it was created using your old ssd.
also: i'm going to assume that you want 500gb instead of 256gb, and i think dd is likely going to give you 256gb since it also copies free space and your drives are not identical; if none of the resize2fs commands work, then you'll be stuck at 256gb. in your shoes i would use that live distribution to create that partition (or better yet an lvm) like you already did; copy the data that you want with something like rsync; then install grub and allow that bootloader installation to create a new ramdisk for you.
this way you're guaranteed to get all 500gb; the bootloader and ramdisk have the necessary bits to run your nvme and installation; plus, if you go with lvm, you're future proofed since you can add drives to that lvm into perpetuity with each new drive increasing its size or retire old drives without having to remove/re-create the volume and all without having to do any of it again.
Yeah, that'll work. Gparted should wipe the destination disk for you and set the boundaries and such. Should be super easy. You can find guides online as well.
Clonezilla is also a super easy route.
Just picked up a 128GB USB A/C stick that can go on my keyring. What are some things I should put on it to have access to at all times?
I already have self hosted services accessible over my VPN, so this would be for when I can't access that.
I'm thinking at least Ventoy and some common ISOs, then I'm not sure what else.
The reason you're struggling to think of anything to put on it is because you don't need to be carrying a USB drive.
No aircraft cabin crew have ever put out a call asking if there are any Linux sysadmins onboard with a copy of GParted Live v1.5.0 for 32-bit ARM devices.
You'll carry it until the plastic cracks and it falls off your keyring.
So don't put anything too private on there.
Isn't it just far easier to transfer documents using one of the thousands of cloud apps though? Since Dropbox and such became a thing I've not had a use for USBs. If it's privacy that concerns you then you already mentioned self hosted services and I'm sure there's a few Dropbox clones among them.
There's not much point in survival PDFs unless you're also carrying a laptop to view them on.
If you really do want to go full apocalypse prepper then track down an archive of Wikipedia and various how-to websites.
Sure, for devices that already are logged in then yes. But to log into my Proton Drive I have to enter my password and authenticate with my Yubikey and it might not be a trusted computer, or the internet connection might be slow. And my self hosted services including my Seafile are behind a VPN so I'd have to log into my VPN on that PC to access them. I definitely transfer files by USB on occasion.
I guess I can put a VPN config file on my USB in the encrypted folder so I can connect to it from any trusted PC
Another common use case is for when I need to give someone else a file when we're in the same room. It's not worth the hassle of trying to transfer it over a network or wirelessly, especially if they are large files or we are on a different OS/ecosystem.
The USB stick just works.
You could get a very very old ebook reader from a yard sale. You get something functional and a lot of them act like a USB drive.
Plus a very small solar panel can charge it.
Do you have a link to the survival PDFs? I'm curious
I have a few apps like that installed, such as first aid for example. Might as well get some useful guides on my USB in case my phone is dead.
Also my recommendation
https://www.reddit.com/r/Survival/comments/732c79/ive_collected_a_bunch_of_free_survival_pdf_links/
Original Zip link is dead but someone in the comments recreated it. No idea if they're any good, hopefully I'll never look at them
No idea if they're any good, hopefully I'll never look at them
Well, better to be prepared. When you are starving and freezing from cold in a forest, lost and about to be mauled by a black bear, it's nice to have that stick around so you can quickly grab it and shove it sideways up in the arse of the bear.
No aircraft cabin crew have ever put out a call asking if there are any Linux sysadmins onboard with a copy of GParted Live v1.5.0 for 32-bit ARM devices.
The grizzled greybeard spoke up, brandishing his weathered USB drive above his head like a sword. "I can do it. I'm a sysadmin."
"Oh, thank God!" the flight attendant sighed. "It says something about booting, I'm not sure. Nobody here knows Linux."
"I’d just like to interject for a moment." the greybeard interrupted with a raised finger and a self-satisfied expression. "What you’re referring to as Linux, is in fact, GNU/Linux, or as I’ve recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX."
He shifted his bulk to block one of the other passengers, who was screaming behind him that nobody cares. The pilot was now standing behind the flight attendant, begging the sysadmin to come up to the cockpit, but the greybeard was undeterred. "Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called “Linux”, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project. There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates t—"
The sysadmin never finished his sentence; the airplane smashed into the ground and all aboard were killed instantly. The impact somehow caused the GNU/Linux device to reboot correctly before it too was smashed to pieces a fraction of a second later.
When I last had an everyday carry USB stick (5+ years ago) I found I never actually used it for anything.
I had Ventoy and some practical ISOs, and PortableApps with a bunch of useful software (firefox, foobar2000, GIMP, notepad++...) for when I was using someone else's Windows PC.
...think I stored like two word documents on it, ever.
Sorry for the noob question.
I know Gparted helps format disks and stuff, but what can you do with it on a USB stick? Is it to format and partition other computers you come across? And how did you get Gparted on the stick itself?
Thanks
just for when i screw up the partitions on my computers
you can get a live iso image with gparted 😀
I don't really carry one anymore, but the one I have at my desk has Ventoy and LMDE on it for when I need to mess with something requiring my system to be down or modify my OS partition. I don't really do much on other PCs except when I have to help my wife with something.
When I was working at my last job I carried 2-3 with a ton of database backups and proprietary software and firmware files for clients' automation systems. Kinda don't miss it at all, but it sure made me feel important, lol.
Mine is a durable, metal 128GB stick. It lives on my keyring and has a relatively recent copy of Arch on it. It's handy for fixing broken laptops and rescuing data. A friend has a more advanced one, with multiple distros on it for different diagnosis options.
The rest of the disk space is just exFAT.
My "everyday carry" isn't a USB stick, but it can act as one - and much much more: I always have my trusty Flipper Zero with me, and the image I carry in the mass storage emulator is the Linux Mint installer, with extra space in the image to store small files.
To be honest, the Flipper Zero's mass storage emulator turns it into the slowest USB stick you never saw. But in a pinch, it's there and it's usable. I use my Flipper for a variety of other things all the time - including, with my laptop, as a presentation remote and secondary mouse - and I almost never need a USB flash drive. So slow though it is, it's enough for when I do need one.
Based on an ultra-low-power STM32 MCU for daily exploration of access control systems and radio protocols. Open-source and customizable. flipperzero.one
Well if you don't have an actual use case for it, don't try to artificially find one.
The only thing I use USB sticks for nowadays is for OS installs.
For everything else their write speeds are slow (even the more expensive USB sticks slow down to a crawl after what feels like not even one complete overwrite) and they are unreliable.
Sure, if you want to carry around random OS installers and live environments, go for it. I personally don't have a use case for it.
The only solid reason I can think of to carry anything on a USB stick is if you're going to be in an area without Internet. If you're in an IT role where you're interacting with end-user machines all the time, then the answer would obviously be some sort of live environment to troubleshoot or fix issues. In that case, load a Ventoy partition with a few different images and be done with it, I guess.
If you're thinking like a Prepper or whatever, keep a copy of Wikipedia, and some survival books maybe? Maps? That's all I can think of. If you're going this far, better carry a backpack with portable solar panels, a large battery, and a lifejacket. None of this matters when you don't have food and water though, so...
What's on your "Everyday Carry" USB stick?
Before Google Drive and Syncthing I relied on such a USB device. Today, no matter what I put on the stick, it's outdated or entirely not what I need when I need something.
Having any stick on hand, and being able to flash an image from your phone, that's nice
What kinda question is that? Seems pretty judgemental to me.
Some people are "the computer guy" for a BUNCH of people, and if your usual pocket arrangement allows them there are a bunch of tools you can use for different jobs.
It's just a different kind of pocketknife at the end of the day. I don't interact with nearly enough people to need one, but I can definitely see the possibilities.
This seems like a question that 90s people would ask. "What are you doing with your life that necessitates carrying a globally-connected supercomputer in your pocket?"
In different use cases I can see plenty of times where a bootable USB drive can mean you can use your own computer from any other machine. Which is super cool. It's gonna be a much slower version of it, obviously (because of USB read/write speeds), but pretty cool that you can carry a full copy of your system, settings, documents, and programs that can sync to/from your regular backups.
Or another with copies of other boot-level tools to have on hand. If you help a bunch of people with converting from microshit to Linux, then keeping a LiveISO on hand for them to try out and install seems like a good idea.
There's just so many reasons why you would ask this. Personally I don't, but if I did I would like to think I could ask the question.
If nothing else, it's interesting to think about for sure. Now I kinda wanna imagine what kind of stuff is even possible to run like this that would be useful to me.
I only own one such at all, and I've only used it a very few times. Once to install my own OS, once to install a different one I leave at my brother's house because his laptop is having issues and I go over there to watch movies with him, and once to install that same one (Mint in those cases, Pop for mine) on my parents' computer.
If I find a good enough use case, I would start carrying at least one. But for now I just rewrite this one for whatever things I need at the time.
Honestly, carrying around a usb drive is generally a pretty good idea. I carry one with several ISOs so I can rescue a machine if something happens and I am unable to fix it (and also show people what modern Linux has to offer).
This is something I carry pretty much anywhere I take my computer, and would recommend to most people. Sure, I could leave it at home, but if I have to meet a deadline, I don’t want to spend the extra hour driving to my house. It’s a worst-case-scenario kind of thing, but it pays off considering how little effort it takes.
Lots of people have already mentioned Ventoy.
MediCat is Ventoy with a ton of images and a config file.
It seems great, although I chose to roll my own as MediCat had a lot of Windows-centric images i have no need for.
A toolkit that helps compile a selection of the latest computer diagnostic and recovery tools. medicatusb.com
Ventoy and...
Clonezilla, (custom) ArchISO, Tails
the stuff you might need to save other people's PCs, sigh...
HBCD_PE, Windows 11
If I hadn't included those in my ArchISO already I would probably add..
one of the usual Rescue ISOs, GParted Live.
Bonus points for Ventoy's ISO partition doubling as simple storage.
PS: Thanks for the reminder to update some of them again.
Sorry about the negativity from so many people.
You do what works for you.
I've got a 15 year old SD/USB combo card on my keychain. I plugged it into a TV around 6-7 years ago because there were a couple of kids movies on there.
I also know I have some Portable apps on there, but probably a little out of date
lol, I feel you there. I got a ruggedized, waterproof USB stick about 6 years ago to keep on my keychain and I've used it maybe three times ever. Though I've also been working from home for the last 4+ years so, y'know, less opportunities to use it in general.
Better to have it and not need it than need it and not have it, though.
A metal 128 GB USB on my keychain next to the U2F key
16 GB Ventoy partition with:
- Clonezilla ('deploying' my system image and backups)
- Mint Debian Edition (everything needed to test and recover my Debian systems)
- Debian netinstall
- Various manuals and reference documents
- Portable CrystalDiskInfo and VeraCrypt for Windows
- Dumping grounds for files that I intended to transfer between machines, particularly the XP retro gaming rig
- An optimistic IF-FOUND.TXT
- KeePass
- Previously Windows, until once upon a time, I booted into WinRE via Ventoy, got confused between X:\, C:\, and whatever else, and proceeded to nuke my USB instead of another disk. The Windows installer lived on its own USB happily ever after.
And a LUKS encrypted partition in the remaining space with more documents and a backup of almost all of my photos.
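For anyone wanting the same setup, an encrypted partition like that can be created with cryptsetup along these lines. This is only a sketch: /dev/sdX3 is a placeholder for the leftover partition (double-check with lsblk, since luksFormat wipes it), and all of it needs root:

```shell
# Create a LUKS2 container on the spare partition (destroys its data!)
cryptsetup luksFormat --type luks2 /dev/sdX3
# Unlock it; it appears as /dev/mapper/usbvault
cryptsetup open /dev/sdX3 usbvault
# First time only: put a filesystem inside the container
mkfs.ext4 /dev/mapper/usbvault
mount /dev/mapper/usbvault /mnt
# ... copy documents / photo backups into /mnt ...
umount /mnt
cryptsetup close usbvault
```

Most desktop file managers will prompt for the passphrase and mount it automatically when the stick is plugged in.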
I had one:
A toolkit that helps compile a selection of the latest computer diagnostic and recovery tools. medicatusb.com
Eh...
Ventoy (on a comically small external hd -- 8 GiB) and retrogaming/backup-related files on a 1 TB one.
Git repos of some helpful scripts and configs.
Music.
Profile backup.
sci-hub and annas-archive
I want to be less reliant on Wikipedia and Google Scholar, but in truth I still use them a lot
I am not the person you are replying to.
I read a lot of papers and it is hard if you don't have background knowledge of the subject. If it's something I am really interested in, then I will dive deep, if it's not I will probably let it go when I get to the point where I no longer grasp what's being said.
Wikipedia editors are petty and incredibly biased. Start reading the talk pages, especially on controversial articles, and your opinion on Wikipedia’s objectivity will rapidly plummet.
Also, it’s a bit like reddit: you find yourself learning so much about new topics, until you start reading about things you have actual expertise on, and you realize the people writing this shit are uninformed idiots, and, when you try to fix the information, the petty nerds who control it revert your changes and ban you.
The same way as topics in my field of expertise, of course.
YouTube.
I'm going to think about that and get back to you. I think it's mostly intuitive, based on many years of experience, but I'm not sure at this point.
I also have to mention that I was half joking. I don't use YT all that much for my profession. I would, but it's just not entirely relevant.
You know that channels can curate which comments they have visible on their videos? Mostly this is used to silence hateful comments, but it's just as easily abused to remove all differing points of view.
If all the comments agree, you're probably in a curated bubble.
Not the other guy, but I learn a lot of high-quality information from YouTube. The golden rule for me is that longer-form video is generally higher quality. People that know what they're talking about typically aren't going to explain complex things in 30 seconds, or at least not to the depth you should understand it.
Aside from that, I look for people with actual qualifications first. Example, I love psychology so I will look for psychologists, licensed professional counselors, and so on. I'll even listen to life coaches, but more selectively.
The lower on the "chain" they are, the more I will do "spot checks" on information and see if they know what they're talking about (ESPECIALLY if they're making big or new claims about something). For that I'll look into peer-reviewed studies and such.
Once you get a small knowledge base it's a little easier to continue. Take something you have a clue about, and watch a video on that topic from another content creator.
Do all of this for a while and you'll find what you need to.
Go with people who are willing to use their real name, a lot of times it'll be in the channel description, or sometimes in a channel trailer or intro video. Sometimes in an interview some other outlet/creator has done on the content creator. Then google that real name and check their work history and education credentials. You can usually find a LinkedIn. If they're a proper academic, their university will usually have a brief page on them on the official university website. If they're an alumni, they can sometimes be found in an alumni list, various class lists, or publicly accessible projects they worked on, though not always. Work history often cannot be as easily verified, but sometimes can be if you dig a little. Depends on field.
It's not too different from what you'd do if you wanted to hire someone to work for you in a small business or something.
Once you have a significant knowledge base yourself, you can start to use the sniff test, though that's always far from perfect. Less time consuming though.
I highly disagree with looking for the widest set of opinions. Some opinions are stupid and/or baseless and just muddy the conversation (that’s part of how you get screaming talking heads on cable news shows).
Personally I look for those with expertise who speak to their expertise. Just because someone has an advanced degree in one field does not mean their opinions in other fields are worth listening to. Also, I do a gut check. If is smells like BS, such as unfounded blanket statements or it seems like they’re pushing/selling something, I look into their qualifications a bit more or find someone else.
Finding a trustworthy source is the hardest part. I generally avoid anyone speaking too loudly of the subject. Someone who’s knowledgeable and confident, most times, can present calmly with context that’s accessible to most people.
Neil deGrasse Tyson is a good example. He’s a good place to start for a broad range of topics. Then if I want more details I can dig deeper on my own. A lot of times, his commentary requires digging deeper because he speaks too broadly.
I always check the source of a report or article; if there is no source, I don’t trust it. The source is usually a good place to ‘bookmark’ for further research.
Edit: a few days later and I’ve come across the perfect example. Here Tyson explains “the tide doesn’t come in and out”. What I think he should more clearly say is there’s no “high tide” and “low tide”. To me, and I could be an idiot, I thought he was going to explain the action of the waves coming in and out at the coastline every 30 seconds or so. Here’s more info about Tidal Range https://oceanservice.noaa.gov/facts/tides.html
Tides are caused by the gravitational pull of the moon and the sun. US Department of Commerce, National Oceanic and Atmospheric Administration
Trying to learn from 'youtubers' seems like asking for trouble.
Lectures posted on youtube etc. are different I suppose.
I'd caveat that with watch reliable well researched channels and not pop-sci or even god forbid pseudoscientific, or pseudo-intellectual channels that seem helpful but are actually BS wrapped in foil.
Any of the PBS science channels are typically good for science.
How money works, Wendover, are great for Economics stuff.
The engineering mindset, practical engineering are great for engineering related stuff.
History of the Universe:
There's probably good stuff on SEA, Astrum, PBS Spacetime, even Cool Worlds. To a lesser extent, perhaps even John Michael Godier or Isaac Arthur have lots of good information, because even though they are sci-fi channels, they do hard sci-fi, so it's all based on established science and astronomy.
History of the Earth (geological):
PBS Eons, SciShow, History of the Earth.
History of the Earth (anthropological):
North 02.
"Don't you know the Dewey decimal system?"
Sorry, stupid reference. In seriousness though, type in a topic into your library's search and start browsing, check out a few that seem useful.
I'm an academic and I find my university's library useful for finding knowledge on a new topic. If an introductory textbook exists on the subject, it can be a good starting point.
For most hobbies though, YouTube is a great resource. I've gotten into woodworking and fishing, and YouTube has been superb for both.
I was taught in school how to use the library catalog. It was considered essential, for success in life, at the time.
I actually do know how to use Dewey Decimal, if I haven't forgotten.
In these modern times, there's generally a PC near the information desk, with the browser home page set to a library catalog search tool, specific to that library.
And as someone else mentioned, we can ask the librarian for help, when we don't find what we need. I actually shortcut the process and ask for a quick lesson in how to use the search, if I'm feeling uncertain.
No, that's mice.
Dolphins are native but capable of space travel as they are far more intelligent than us. It's an understandable mistake to make.
Isaac Asimov wrote books on a wide range of topics.
Start with him
Wikipedia rabbit holes every time lol.
I am fascinated by medical stuff, especially conditions I have and similar conditions. Spent like 2 weeks reading about so many kinds of diseases.
Escalate. Start with early digestible low quality sources (AI chat bots, short YouTube videos, old Reddit threads, etc.) to build a general familiarity with the subject matter space.
Once you grasp the basic vocabulary and concepts, you know well enough what questions to ask to find more nuanced discussions and the right Wikipedia rabbit holes.
If you need more comprehensive understanding than that, use your newfound familiarity to start skimming primary sources.
Once you get more involved than deep dives into primary sources, you start blurring the lines of developing a new area of relative expertise.
Read. Write. Execute. RWX. I'm going to piss some people off. Here goes: you are wasting your time if you watch videos. At all. A video moves at the pace it plays. It is linear. You can't jump around easily. Reading? You can jump wherever you need immediately. You can have multiple sources at once. If you use a book, yes a physical book, you learn where things are and jump right to them. Read
Write down a paraphrased version of what you read. Do not copy. Include references so you can return to source if needed. Note taking is a skill. Your notes should be organized in a way you can skim what you wrote as easily as the sources themselves.
Execute. You don't learn anything unless you do it. I've had too many students who watch Khan Academy, and think they understand it when they haven't done it. They don't score well on exams. Not my fault. I told them they have to do it to understand it.
RWX. I await the flame war I just started with the video people.
It might vary from person to person? I agree with you, tho. That's also my preferred method.
However, if the stuff you're reading is fairly dense and not that well organized, you're gonna have a harder time than watching a well written educational video or lecture and taking notes along the way.
I can see where you are coming from, but that is a skill in and of itself. Go far enough into any technical field and you reach that boundary. Especially if you do research.
It's this kind of thing that develops into imposter syndrome. You've gotten this far doing things this way, and it's always worked. You are told you are smart. Fixed mindset. Maybe you aren't that smart at all. It affects your mental health dramatically. I've literally seen it hundreds of times.
But I do get it. Students are expected to perform at a high level. That approach is expedient and it works well to get everything done.
I recognize things are different than they were 'back in my day', but I was a C student. I did the bare minimum, except for the subjects I cared about. Those I was exemplary.
Now 'kids these days' will say "no, that's bullshit. It doesn't work anymore". That, I can tell you, isn't true. I have those students. You just need to figure out how to get around the artificial red tape that keeps you from focusing entirely on what you want.
(Sorry for sp. I haven't installed spell check on this phone)
My fellow software engineer, It's the year 2024. Please store your #Linux #desktop application configurations ONLY in `$XDG_CONFIG_HOME`. NOT in `$HOME` or other non-standard or obsolete places. May #FreeDesktop be your guide. Mastodon
A shell script which checks your $HOME for unwanted files and directories. - b3nj5m1n/xdg-ninja GitHub
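For shell scripts, the spec's lookup rule is basically a one-liner: honor $XDG_CONFIG_HOME when it is set and non-empty, otherwise fall back to ~/.config. A minimal sketch (myapp is a placeholder name):

```shell
# Resolve the per-user config directory the XDG way. The :- form of
# the expansion also covers the set-but-empty case, which the spec
# says to treat the same as unset.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
mkdir -p "$config_dir"
echo "config lives in: $config_dir"
```

The same pattern applies to $XDG_DATA_HOME (default ~/.local/share) and $XDG_CACHE_HOME (default ~/.cache).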
Take some of that money you get fining surveillance-capitalists and use it to fund privacy-respecting libre alternatives
Nice idea, I love it. But you have to remember, those investigations cost huge amounts of time and money. Consider the cost of full-time staff over ten years, plus the cost of building a case against some of the largest corporations, all before any court costs are considered.
We are likely better off having that money reinvested in preventing other companies from these practices.
I don't think the issues you raise are valid.
It costs more than zero to levy a fine, but we are talking about many billions in income here. Your point would be valid if gathering staff and then fining Google 9 billion were a net zero. It isn't.
We are likely better off having that money reinvested in preventing other companies from these practices.
Which is what I suggested. The best way to prevent these practices is libre alternatives.
All the cloud services are built on top of free software, and big tech just built user interfaces on top of them.
That wasn't the intended purpose of course, to support billionaires becoming multi-billionaires. So maybe a percentage of the profits should go back to the makers.
These past two weeks were big for Wayland accessibility support, as Nicolas Fella did a lot of work to improve support for sticky keys to equal the state they were in on X11. This work is not compl… Adventures in Linux and KDE
Hi,
I've noticed something quite odd, but I don't know if the problem comes from Linux itself or from nginx.
In order to grant nginx access to a directory, let's say your static files directory,
see: https://stackoverflow.com/questions/16808813/nginx-serve-static-file-and-got-403-forbidden
the parent directories "/", "/root", "/root/downloads" should give the execute (x) permission to 'www-data' or 'nobody'. i.e.
but it seems that not only the direct parent needs to be given XX5, but the whole chain,
for example
example
└── sub1
└── sub2
└── static
where "others" need read and execute (5) on the whole chain.
Thanks.
Just want to help somebody out. Yes, you just want to serve static files using nginx, and you got everything right in nginx.conf: location /static { autoindex on; #root /root/downloads/… Stack Overflow
Thank you all !
Indeed, setting the execute permission on example, sub1, sub2, and static did it. The program/user now has access to the directory. In other words, all the parent directories need at least the execute bit in order to have access to the target directory.
Now I gave static 751, meaning that others (here nginx) cannot list the files within. But nevertheless it works:
the static files are served when requested over HTTP. Does forbidding nginx from listing the directory change anything (performance/security)?
Thanks
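A handy way to see and fix the whole chain at once. The paths follow the /root/downloads example from above, and namei is part of util-linux, so it should already be installed on most distros:

```shell
# namei -l prints the permissions of every path component, so a
# missing execute bit anywhere in the chain is obvious at a glance:
namei -l /root/downloads/static

# x without r on a directory means "may traverse, may not list" --
# that's why 751 on the static dir still serves the files over HTTP
# while denying nginx a directory listing:
chmod o+x /root /root/downloads
chmod 751 /root/downloads/static
```

Security-wise, denying the read bit mostly just prevents autoindex-style listings; nginx only needs search (x) on the parents and read on the individual files it serves.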
A big difficulty is that between a scientific discovery and its application, years or even decades can pass. Look at how superconductors have been known for 100 years and still have very few real-life uses.
My thoughts though:
- physics beyond the standard model at the LHC: no impact for the commoner, but it would really help physicists understand our universe
More on technology/applied science
I don't think fusion would be as useful a technology as it would have been a few decades ago. Now renewables (wind, solar, hydro) seem more and more like the clean and cheap energy of the future. The biggest problem, storage, is rapidly being solved with batteries springing up everywhere.
The real problem with fusion is that even if it worked, the plants would be very complex and expensive. It would be much cheaper and reliable to build solar, wind and batteries instead.
Having operational fusion reactors would be cool as hell, but it wouldn't have that much impact on our lives in the end.
Respectfully, I disagree. We've entered an AI boom, and right now, the star of the show is in a bit of a gangly awkward teenage phase. But already, these large data models are eating up mountains of energy. We'll certainly make the technology more energy efficient, but we're also going to rely on it more and more as it gets better. Any efficiency gains will be eaten up by AI models many times more complex and numerous than what we have now.
As climate change warms the globe, we're all going to be running our air conditioning more, and nowhere will that be more true than the server centers where we centralize AI. To combat climate change, we may figure out ways of stripping carbon from the air and this will require energy too.
Solar is good. It's meeting much of our need. Wind and hydroelectric fill gaps when solar isn't enough. We have some battery infrastructure for night time and we'll get better at that too. But there will come a point where we reach saturation of available land space.
If we can supplement our energy supply with a technology that requires a relatively small footprint (when it comes to powering a Metropolitan area), can theoretically produce a ton of power, requires resources that are plentiful on Earth like deuterium, and doesn't produce a toxic byproduct, I think we should do everything in our power to make this technology feasible. But I can certainly agree that we should try to get our needs completely met with other renewables in the meantime.
While I agree with what you've said, I've always felt fusion and other such tech is the future of long distance space travel, not Earth based energy use. Wind and hydro are useless in space and solar has issues with power accumulation the further away from a star you go. We will still need some kind of "fuel" based energy source if we're ever to enter deep space and cross the gaps(unless battery tech increases much further to the point that a "battery" lasts a significant portion of the vehicles lifetime). Even then, you'd need recharge stations at each end or to park by a star to refuel in between.
We have fusion/fission now. That kind of battery tech is still a ways off. Feels shortsighted to ignore nuclear now just because it's not perfect in this specific environment. After all, name any vehicle not powered by nuclear that can run for 20-30 years before it needs to refuel/recharge. No battery tech can even come close currently.
Fusion is likely the end-game power gen tech for humanity, assuming no new physics (and excluding Dyson structures). For the long term, it likely will be the most useful way of generating mass amounts of electricity you can get, and access to more energy enables more possibilities of all sorts of things, enabling even things that are extremely impractical today due to their energy needs
For example, carbon capture becomes a possibility, and stuff like mass desalination. And then you could, in theory, go even more extreme with stuff like terraforming mars at human timescales, with enough energy. Of course this depends how practical and efficient fusion reactors actually would be, but with enough energy you can do so so much
The large storage batteries that use sodium ion. They should be able to get like 5,000 full cycles before they degrade and can be buried or stored outside. That and a solar array on a roof should let most anyone be completely off grid. Full solar house that should last for 15 years before the system needs replaced. The batteries will last longer and be cheaper than lithium. Solar panel prices are consistently getting cheaper.
I think in 5 years time there will be a lot of the electrical grid system (for most who will be still attached to the grid) just getting power almost completely from solar, and storing enough in these batteries for the nights and cloudy days.
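A quick back-of-the-envelope check on those numbers. The cycle count and one-cycle-per-day pattern are the post's assumptions, not datasheet values:

```python
# Sodium-ion cycle life vs. a daily solar charge/discharge pattern.
rated_cycles = 5000      # full cycles before degradation (figure from the post)
cycles_per_day = 1.0     # charge from solar by day, discharge at night

years = rated_cycles / (cycles_per_day * 365)
print(f"~{years:.1f} years of daily cycling")  # ~13.7 years
```

So one full cycle a day comfortably covers the claimed 15-year panel lifetime, with margin if the battery isn't fully cycled every day.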
We’ve decarbonised a decent chunk of the world’s energy profile
Unfortunately, things like AI continue to fuel our hunger for power, preventing fossil fuels from being phased out… and as such, CO2 production continues to accelerate uncontrollably.
Yes, atmospheric CO2 production continues to accelerate. It hasn’t even begun to slow down, much less reach a steady state or reverse.
And this is excluding the feedback loops (arctic permafrost, methyl hydrates, etc.) that are now beginning to cook off in nature.
We are still solidly on the “business as usual” path towards civilizational collapse by some point in the 2050s, and functional extinction by some point between 2100 and 2200.
Nuclear is going through a breakthrough and it's coming along at a neat net zero.
In a breakthrough experiment, nuclear fusion finally makes more energy than it uses
The sun creates energy through nuclear fusion. Now scientists have too, in a controlled lab experiment, raising hopes for developing clean energy.James R. Riordon (Science News Magazine)
I don't know about air travel. For comparison, Li batteries are about 200-300 Wh/kg, with solid state reportedly reaching 3-4x that.
Jetfuel is 11000 Wh/kg. Hydrogen is 39000 Wh/kg.
By volume they might have an advantage but planes tend to care more about weight
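Putting the comment's figures side by side; the solid-state number is just the "3-4x" claim applied to a mid-range Li-ion figure, not a measured spec:

```python
# Specific energy (Wh/kg) comparison by mass.
li_ion = 250                    # middle of the 200-300 Wh/kg range
solid_state = li_ion * 3.5      # "3-4x" claim -> ~875 Wh/kg
jet_fuel = 11000
hydrogen = 39000

print(f"jet fuel / solid-state: {jet_fuel / solid_state:.1f}x")   # 12.6x
print(f"hydrogen / solid-state: {hydrogen / solid_state:.1f}x")   # 44.6x
```

Even with the optimistic solid-state figure, chemical fuels keep an order-of-magnitude lead by mass, which is why weight-limited aviation is the hard case.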
I second the other Lemmy comment saying there is a considerable gap between discovery and implementation.
But to answer your question, I believe we are due some major breakthrough regarding psilocybin and other psychedelic substances which have been banned since the 60s. Research is well underway and with our current technology + knowledge in neuroscience we're due to catch up quickly, unless everything gets tangled in too much red tape.
Improvement in mental health has a pretty immediate impact on our lives, after all.
If you don't tell your sea life who's doing the polluting, then transgender communists will.
Please. Talk to your local sea life about pollution.
(Brought to you by the Committee for Orca Attacks)
In the next 5 years?
I'm skeptical. In the next 20, sure.
Just a note: Superconductivity is not only destroyed by temperature, but also by magnetic fields or a too high current. We might find a room temperature superconductor that is basically useless for energy transportation or high magnetic field applications.
Another problem: almost all known high-temperature superconductors are ceramics and thus very brittle and hard to work with.
What we want is a cheap, metallic, high temperature superconductor with a high maximum critical magnetic field and high critical max current density...
But of course any improvement could give big improvements in some applications. Having a nitrogen-cooled MRI would be awesome.
Any form of room-temp superconductor would be awesome. Electronics would stop emitting heat, and in the case of ICs and microprocessors, the difficulty of working with the material wouldn't be an issue, since you fab them anyway.
Also school level science experiments will get more exciting
Estimates of food emissions can range from one-quarter to one-third. Where do these differences come from?Our World in Data
I came here for this one.
I don't follow news on alt-meat super closely, but I thought the scalability thing was not much of a concern, and I thought we were mainly at the steps of raising demand and meating or beating price parity with the real thing.
Something like an alt-meat only fast food spot or something trendy that can only be done with lab created protein could be all it takes to cement its place in society, so I feel we're just waiting for that tipping point due to product that mainly already exists.
I knew you guys would pick up on that one!
The spell checker really did not want me to make that little joke, but I knew it would be appreciated.
In my defense, I’m half asleep, and due to lack of caffeine, didn’t notice the bit about “which could actually happen in the next 5 years.”
So with that in mind, I’ll say something about environmentally friendly raw materials for super efficient battery storage.
MRNA vaccines for cancer, HIV and others.
Moderna clinical trials have been real good.
Imagine getting a cancer diagnosis, then 30 days later getting a tailored treatment that eliminated the cancer.
Also vote. Because one party has decided to side with anti-vaxxers. The other has not. Cancer numbers have been steadily rising, second only to heart disease as a cause of death. There is a solid chance you're going to get cancer.
mRNA-4157/V940, in combination with KEYTRUDA, demonstrated a statistically significant and clinically meaningful reduction in the risk of disease recurrence or death compared to KEYTRUDA monotherapy in stage III/IV melanoma patients with high risk of…investors.modernatx.com
Voting is mandatory here. So no need to tell me to vote.
And I'm not sure which party is anti-vax.
For everyone? Nuclear fusion is on the cusp of reaching net zero emissions, meaning we can create massive amounts of clean energy. Right now we use nuclear energy from nuclear fission, which creates hazardous waste and excess heat.
Nuclear fusion would allow us to create clean energy with the goal of being net zero.
Nuclear Fusion and "net zero emissions" doesn't really make sense.
What I think you are trying to say is that fusion is nearing the point where net energy is possible (that is, getting more energy out than the amount of energy put in to create the reactions in the first place). Fusion is not practically close yet, but there are tantalizing hints that we are close.
See this from 2022; the National Ignition Facility produced more energy than was delivered to the target (2 MJ in, 3 MJ out), but this doesn't take into account the huge inefficiencies of the laser generators used to produce that 2 MJ laser pulse.
There are a bunch of fusion experiments hitting massive temperatures (120-150 MK), which is starting to get into the range where practical fusion could occur; the center of the sun is approx 15 MK but also has massive gravity to encourage fusion.
So fusion is still a decade away at least, but we understand the science much more completely now. We know the problems (well, a bunch of them) and it is mostly now a very difficult engineering problem rather than a problem of understanding the science.
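The distinction between target gain and overall gain can be made concrete with the comment's numbers. The grid-energy figure is an assumption of the rough order reported for NIF's laser system, not an exact value:

```python
# NIF-style shot: target gain vs. wall-plug gain.
laser_on_target = 2.0    # MJ delivered to the target (from the comment)
fusion_out = 3.0         # MJ of fusion energy released
grid_energy = 300.0      # MJ drawn from the wall to fire the lasers (assumed order)

target_gain = fusion_out / laser_on_target   # > 1: "more out than in" at the target
wall_plug_gain = fusion_out / grid_energy    # << 1: the laser chain is very lossy

print(f"target gain: {target_gain:.2f}")       # 1.50
print(f"wall-plug gain: {wall_plug_gain:.2f}")
```

A headline "gain above 1" at the target can coexist with an overall gain around one percent, which is why the result is a milestone rather than a power plant.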
China has achieved a new milestone in humanity's experiments to harness the power of the stars.Tessa Koumoundouros (ScienceAlert)
Specifically, I think the ability to make hydrogen from renewable resources at large scale will change everything. Hydrogen fuel cells are more space efficient, and require less toxic manufacturing, when compared to current renewable energy generation and storage methods. If hydrogen is seen as cheaper or more green than other power sources, it will change the market completely.
Hydrogen generation is also an active research area, and just this year they've had some promising results for renewable hydrogen.
High temperature superconductors.
Specifically anything above commercial / household freezer (-18C); but if we could get to ~105C (above water boiling) it would change literally everything.
Electric motors become more efficient over a much greater RPM range.
Superconducting magnets become much easier to construct and run, this gives us a much better chance at fusion.
Transmission lines themselves are pretty efficient as it is, but all of the associated switchgear at the conversion points all gets really warm, this could be virtually eliminated.
The conductors on circuit boards, and potentially inside microchips. This reduces heat loading and thus makes all computing devices more efficient.
The conductors in batteries; enabling these to be smaller and thus increasing battery energy density.
Finally making super-capacitors actually viable as longer term energy storage.
There are so many aspects of life that would be impacted by this one breakthrough, that it is probably the most important thing that will happen this century (scientifically speaking). It would be almost as revolutionary as when electricity itself became widespread.
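To put "pretty efficient as it is" in numbers, a rough ohmic-loss sketch. The line voltage, resistance, and power are illustrative round numbers, not data for any real line:

```python
# I^2*R loss on a conventional HV line vs. an ideal (zero-resistance) superconductor.
power = 500e6        # 500 MW delivered
voltage = 345e3      # 345 kV line (illustrative)
resistance = 10.0    # ~10 ohm total over ~100 km (illustrative)

current = power / voltage              # ~1449 A
ohmic_loss = current**2 * resistance   # watts lost as heat
superconducting_loss = 0.0             # no DC resistive loss at all

print(f"ohmic loss: {ohmic_loss / 1e6:.1f} MW "
      f"({100 * ohmic_loss / power:.1f}% of delivered power)")
```

A few percent lost on the line itself is why the bigger wins from superconductors would be in the switchgear, motors, and magnets listed above rather than in long-distance transmission.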
Don't get me wrong.
I absolutely love Fedora Atomic (Silverblue, Bazzite, Kinoite, Aurora, IoT, etc.), more than any other distro I've used, and I plan to continue using it.
It has never given me any problems on any of my devices, and because it is pretty much indestructible and self-managing, I even planned to install it on my Mum's new laptop, in case her current one (basically a toaster with Mint on it) breaks.
But after the last few days, my trust is damaged quite a bit.
First, the incident where I couldn't update anymore on uBlue because of faulty key pairs.
This is a huge thing for me, because uBlue updates in the background, and if I hadn't read about it here on Lemmy, I would have found out way too late, which is a security risk imo.
And now, my devices weren't able to boot anymore due to some secure boot stuff.
Again, if I hadn't subscribed to Fedora Magazine, I would have noticed it way too late.
I was able to just boot into an older image and just paste a few commands from the magazine's post, and it was resolved in just seconds (download time not included).
Both instances were only a minor thing for ME.
But both would have been a headache if I didn't follow those blogs, which is something only nerds (like myself) do.
Nobody else cares about their OS, it is supposed to just work, hence why I use Atomic.
I don't wanna blame the devs (both j0rge/ uBlue and the Fedora team), they were very quick, transparent and offered very simple fixes.
And, being able to just boot into an older image, just in case, is something I am very thankful for, but nothing I want to depend on.
Having to be informed about stuff like this and then having to use the CLI is just a no-go for most people.
Am I over-reacting about this too much? What's your view on those things?
You are completely right, they've dropped the ball. Of course it's open source, so the devs are not duty-bound to keep the system running well. It's just that my trust is shaken that I could just set up grandma's computer with this and not need to maintain it.
These days even Apple and Microsoft struggle with testing their updates and pushing out updates that are not broken or system breaking. Maybe the grans of the world should just become more tech savvy. ;)
Then again if long term Fedora immutable systems only fail like this once every two years, then we are not really worse than needing to deal with Windows rot.
and how do I react next time they don't greet me?
I started working at this department 3 weeks ago. I went into the office I now work at, greeted 2 coworkers I've already worked with, they looked at me, said nothing, kept talking to themselves.
How am I supposed to interpret that?
To me this is disrespectful, maybe you disagree?
Then, as I was working, I saw both of them staring at me. What am I supposed to do when that happens? To me this signals hostility and passive aggressiveness.
I separate my work life from my personal one, but even I know that the least you can do is greet your coworkers, unless you want them to quit.
Just remember that you all are cogs in the machine, and also nobody owes you anything - including a greeting.
My guess is your workplace has a low-personal-life, low-banter culture or even policy. If that's the case, you may be talking to people who know this and don't want to get in trouble, or to people whose souls are crushed and there's no life behind their eyes.
Don't take it personally.
I've worked in both kinds of environments. I prefer high-banter/high-friendship environments, but I work fine in either.
Your options are essentially to deal with it or ask them what's up.
Personally I wouldn't waste time with co-workers who are being rude.
They could be jealous of your position for some reason or taken offense to something random you did. They may just be assholes.
At the end of the day you're there to work and if it doesn't affect that I wouldn't bother.
If they are increasingly hostile maybe have a quiet word with HR.
Is it possible that they greeted you nonverbally since they were already in a conversation?
Don't let it get to you. They'll come around if you keep up the positive vibes. You're also new, so you'll be learning the behaviors of people you barely know. It's also possible that these folks are quite friendly, but maintain a strict focus when they aren't taking a break. There are a million different reasons why they didn't verbally respond, so don't take it personally.
The way I greet people at work is a basic "hey" and wave as I walk by, if they don't seem too busy. If they respond I hear it, if they wave I'll hopefully see it, and if they do nothing then I've already walked past them.
If this was a single occurrence, I'd try not to read too much into it. Maybe they were discussing something private and got all weird when interrupted. Maybe the greeting was non-verbal and you missed the cue. If it's the beginning of the day, they might not be all awake yet, I dunno.
But if it's a pattern, or this ever happens and it bothers you, you can try to make the most of it. Imagine they wished you their fondest greetings in a Muppets style voice. It costs you nothing and you can't change anyone else's behaviour anyway, might as well do something to put a smile on your face.
Have you previously annoyed them in some way, or had a disagreement?
Were they in the middle of a proper conversation and might’ve felt like you butted in with a greeting?
It sounds odd. I’d have a think about whether you’ve previously annoyed them in some way, but if not then they might just be grumpy. In which case there’s nothing to worry about, and you just do you.
OP's last post is about shouting at someone who they felt was acting like they were their manager but wasn't, then feeling upset that no one asked for their side of the story.
There's more going on here.
Why even be offended by such a small thing? What's even the point of greeting though? I can see it makes sense when you want to talk to someone. The "hey", "hello" or whatever can work to grab attention and they can acknowledge that by responding back as opposed to immediately talking and the other person missing part of or the entire first sentence, if not more.
But otherwise, there's a good chance it's distracting or even distressing.
I usually try to greet back, unless it's awkwardly late because I didn't expect it; it caught me off guard and I spent too long thinking about what I'm supposed to do and what they want from me.
But this generally makes me forget what I was just thinking of or what I was doing and makes me anxious. Then I may even be thinking of how I handled that for the next few minutes. I hate that.
Don't get me wrong, I am not mad at people for greeting me, I know they just do that, but I'd rather they didn't. And as such I won't greet anyone either unless I need to talk to them. I don't want to cause the same issues for others just to say "Hi".
they looked at me, said nothing, kept talking to themselves.
Yep. Sounds familiar. "Am I supposed to say something? I am paying attention, I am looking at you. Go on. Oh, nothing, OK..."
People are making some good points about cultural background differences and asking whether you have history already.
Others say, keep doing what you do and don't let them get to you. I want to jump on that bandwagon, this is going to sound silly cringe but...
... greeting in a polite, confident and friendly manner asserts social dominance. You have no fear. You are the initiator; you take the lead. Be that person.
Greet them first. Move on with your day.
They're staring at you because you're attractive, but also too crazy to befriend.
Oh wow, yeah. Leaving that out is a big red flag.
Look into The Missing Missing Reasons
Members of estranged parents' forums often say their children never gave them any reason for the estrangement, then turn around and reveal that their children did tell them why.www.issendai.com
even I know that the least you can do is to greet your coworkers
Greeting coworkers is definitely not obligatory, and neither is responding to a nothing greeting. It's unlikely there's any hostility or passive aggressiveness.
It's a ritual you're used to. It doesn't mean it's one they're obligated to reciprocate.
I went into the office I now work at, greeted 2 coworkers I've already worked with, they looked at me, said nothing, kept talking to themselves.How am I supposed to interpret that?
I think you should interpret it exactly how it sounds.
It may or may not be fair. Personally, with very few exceptions, I dislike coworkers and want as few interactions with them as possible, whether positive or negative. I just don't care. But regardless of that, your coworkers are there because they have to be, and if they've decided they don't want to interact with you and are now letting you know, that is their option, whether it's fair or not.
Why are they talking to themselves? Are they busy and don't want to be distracted?
Or do you mean two people were talking amongst themselves?
~~You're the kind of person I would just stare at if you greeted me at work. Why you gotta be like that?~~
Edit: My bad. Hi coworker!
None of us are in your shoes so it's really tough to say what your coworkers' motivations are, but at the end of the day you are yourself, you are in charge of your mental and physical well-being. When someone else does something minor and it affects you strongly it's time to stop thinking about them and start thinking about what's happening in your own body.
Unfortunately your emotions, like being offended, aren't entirely in your control. There are a lot of brain connections rustling around up in your noggin that don't pass through the filter of your consciousness.
The best advice I can offer is to redirect yourself when you start to get offended. Pick a favorite topic, something that you like to think about often, and "switch" to it when you feel yourself getting triggered.
As for how you should act when you aren't greeted directly? I see no reason for you to change your behavior, just act as though nothing happened, because nothing did happen
I cannot definitively know that you, or anything else exists. I am stuck within the context of my interpretations of the data at hand and even that data cannot be considered beyond refute. So, if I am stuck in this "simulation", then how I interpret and interact with the "simulation" is up to me.
OP interprets a lack of response as a slight. Maybe it is. Maybe those people cannot stand OP. Maybe those people only heard a mumble coming from the general vicinity of OP. Maybe those people were having an all-consuming conversation that OP's presence could not disrupt. Whatever. Ultimately the only thing that matters is how much weight OP gives to that set of input data, because no matter what anyone does, nothing can truly interact directly with OP.
Stop caring. Seriously.
These people aren’t your friends. You just need to get along well enough to do your jobs without hating each other.
If they are so miserable that they can't engage in simple social cues then it's their problem. If they want to be friendly then be friendly. Otherwise fuck em. Move on.
If it were me I would stop greeting people. Stick to myself. Concentrate on what is important which is putting food on the table. Be cool with the people that are actually cool and leave the rest to be miserable.
Yes this is disrespectful.
My advice would be to stop greeting them until they notice you. Occasionally try again, but not every day.
both of them staring at me. What am I supposed to do when that happens?
In this case, you say "what is it?" to them. If they continue to stare at you wordlessly, you may escalate slowly to either violence or a report to authority.
Whether you decide to self-handle it or have authority handle it, if they continue to stare wordlessly at you as you try to communicate with them, you escalate to the point where it forces them to move.
I was once at a party and I was trying to introduce myself to this person. I put my hand out to shake theirs and they fold their arms up, look away and say hello. How rude. I say, what's up? They, not making eye contact, arms folded: 'Nothing'.
I leave this weird game this person is playing fairly annoyed.
I walk over to a friend, who's this person who won't even look at me? 'Oh that's Tim, he's blind.'
Interesting, but I haven't changed that either.
And, gah, it just worked normally. WTF. I haven't changed anything yet this morning.