Linux


From Wikipedia, the free encyclopedia

Linux is a family of open source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution (or distro for short).

Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy.

1426
 
 

Locally, everything works fine over HTTP (http://192.168.1.222).

Externally, however, it only partially works over HTTPS (https://mydomain:8344) through Caddy: I can reach the site (first picture), but streams won't start.

Any idea why? My theory is that the RTSP port (554) is used for streaming, and that when I visit the local address (which is on port 80), the site itself initiates a connection to port 554 in the background. This apparently does not happen when I connect remotely.

EDIT: In the same Caddyfile, I reverse proxy my Jellyfin server, which only uses a single port, and that works fine. The Caddy server runs on Ubuntu Server 23 on a Raspberry Pi 5.
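For anyone comparing setups, here is a minimal Caddyfile sketch of this kind of proxy (hostname and ports taken from the post). Note that Caddy's reverse_proxy only handles HTTP(S); a raw TCP protocol like RTSP on 554 is not forwarded by it and would need something like the community layer4 plugin or a plain port forward on the router:

```
mydomain:8344 {
    # proxies only the web UI; RTSP traffic on port 554 is not covered by this
    reverse_proxy 192.168.1.222:80
}
```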

1427
 
 

Hello. I have never used Linux before in my life, but this post isn't really about the software. I know there are many guides and threads out there explaining how to set up Linux for beginners.

My question is more about what computers you would suggest for Linux. I don't have any old computers lying around at home; I only have a computer assigned by my school that I'll turn in next year. To my understanding, Linux should work on almost all computers, so I haven't thought about a specific brand.

My top priorities are (in order):

  • good/great battery life
  • quiet
  • compact and lightweight

Preferably a 13" or 15" screen, though I prefer the former. Just a small machine with great battery life that doesn't make much noise when several apps are open at once. I have looked at Asus before, but I'm not sure what the general consensus on this brand is, so I was hoping to get some suggestions. I've also looked at Framework computers, but honestly they're a bit expensive for me. My budget is ~$1000 (10 000 SEK).

Might be unnecessary information, but: I will be using this computer mainly to write documents, make the occasional presentation, browse the web, and watch videos and movies. So no photo or video editing, and no gaming at all. Like everybody, I hope to buy a computer that will last many years and survive many student theses. Cheers and thanks!

1428
53
submitted 6 months ago* (last edited 5 months ago) by WeebLife@lemmy.world to c/linux@lemmy.ml
 
 

Hi everyone, I want to gradually make the switch from Windows to Linux on my daily-use desktop. I figured the best way would be to dual boot them. I have a spare drive in my desktop that I cleaned for Linux, so I can keep it separate from the Windows drive. Here is the process I went through, which ended up being unsuccessful:

  • Removed the Windows drive and installed Mint on the separate SSD; the install was successful
  • Installed Steam and tried some games, then shut down the PC
  • Put the Windows drive back in; the PC wouldn't boot to the Windows drive but was still booting to Mint
  • Went into the BIOS and selected boot override to the Windows drive; it still wouldn't boot
  • Created a Windows recovery USB and tried to fix the boot in recovery mode; the recovery media wasn't able to fix it
  • Booted into Mint, mounted the Windows drive, and copied all the documents I needed to an external drive
  • Nuked both the Windows and Linux drives and did a fresh install of Windows

Afterward, I googled how to do this properly, and the posts I found described basically the same process I followed. I would like to try again, but I don't know what I did wrong and don't want to have to go through all that again.

Thanks.

PS. I have an extensive library in steam already. There's several games that I have hours into and have friends that I play with, which is why I want to keep windows for the time being while I figure out how Linux gaming works.

EDIT: thanks for all the comments. It appears my problem was when I removed my windows drive from my PC when installing mint. I will try again and keep both drives in my PC. Thanks!

EDIT 2: UPDATE: I have successfully dual booted Windows 10 and Linux Mint. After thinking about my problem for a while, I remembered an important detail. When I first built my PC, I had Windows installed on a 120 GB Kingston SSD. I later purchased an M.2 drive and installed Windows on there. That Kingston SSD is what I wiped and put Linux on, so I'm thinking maybe the bootloader stayed on the Kingston drive?? I'm not exactly sure, but after watching this video, I was confident that my original plan would work this time since I did a clean wipe of both drives and did a fresh install of Windows on the M.2. I am now able to boot into Windows 10 and Mint from the BIOS with no issues. Thanks everyone for your help.

SOLVED TLDR: you can dual boot Windows and Linux on 2 separate drives, and it is perfectly safe (and recommended in the video I linked) to remove the Windows drive from the PC while installing Linux on another drive.
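For anyone retrying this, it can help to check which drive actually holds the EFI System Partition (and therefore a bootloader) before wiping anything; just a diagnostic sketch using standard tools:

```shell
# List every disk and partition with filesystem and partition type;
# a Windows bootloader lives on a partition typed "EFI System".
# (sudo efibootmgr -v additionally lists the firmware's boot entries.)
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME,MOUNTPOINT
```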

1429
 
 

Source
Linux currently 29.1%
Sample size according to StatCounter: 24,353,436 page views

1430
 
 

I'm proud to share a major development status update of XPipe, a new connection hub that allows you to access your entire server infrastructure from your local desktop. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh, docker, kubectl, etc. to connect to your servers, it will automatically integrate with them.

Here is how it looks, if you haven't seen it before:

Hub

Hub Alt

Browser

Local forwarding for services

Many systems run a variety of services, such as web services. There is now support for detecting, forwarding, and opening these services. For example, if you are running a web service in a remote container, you can automatically forward the service port via SSH tunnels, allowing you to access the service from your local machine, e.g. in a web browser. These service tunnels can be toggled at any time. The port forwarding supports specifying a custom local target port and also works for connections with multiple intermediate systems through chained tunnels. For containers, services are automatically detected via their exposed mapped ports. For other systems, you can manually add services via their port.
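The tunnels described above are classic SSH local port forwards; done by hand (with a hypothetical host and ports) the equivalent looks like:

```shell
# Forward local port 8080 to port 80 on the remote machine over SSH;
# while this runs, http://localhost:8080 reaches the remote service.
# -N: no remote command, just the tunnel.
ssh -N -L 8080:localhost:80 user@remote-server
```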

Markdown notes

Another commonly requested feature was the ability to create and share notes for connections. As Markdown is everywhere nowadays, it made sense to implement note-taking with it, so you can now add Markdown notes to any connection. The full spec is supported. Editing is delegated to a local editor of your choice, giving you access to advanced editing features and syntax highlighting there.

Markdown

Proxmox improvements

You can now automatically open the Proxmox dashboard website through the new service integration. This will also work with the service tunneling feature for remote servers.

You can now open VNC sessions to Proxmox VMs.

The Proxmox support has been reworked to support one non-enterprise PVE node in the community edition.

Scripting improvements

The scripting system has been reworked. There have been several issues with it being clunky and not fun to use. The new system allows you to assign each script one of multiple execution types. Based on these execution types, you can make scripts active or inactive with a toggle. If they are active, the scripts will apply in the selected use cases. There currently are these types:

  • Init scripts: When enabled, they will automatically run on init in all compatible shells. This is useful for setting things like aliases consistently
  • Shell scripts: When enabled, they will be copied over to the target system and put into the PATH. You can then call them in a normal shell session by their name, e.g. myscript.sh, also with arguments.
  • File scripts: When enabled, you can call them in the file browser with the selected files as arguments. Useful to perform common actions with files

Scripts

AppImage support

There are now AppImage releases available for x86_64 and arm64 platforms.

While there were already other artifact types available for most Linux systems, this is useful for atomic distributions.

A new HTTP API

For a programmatic approach to managing connections, XPipe 10 comes with a built-in HTTP server that can handle all kinds of local API requests. There is an openapi.yml spec file that contains all API definitions and code samples for the requests.

To start off, you can query connections based on various filters. With the matched connections, you can start a remote shell session for each one and run arbitrary commands in it. You get the command exit code and output as a response, allowing you to adapt your control flow based on command outputs. Any passwords and other secrets are automatically supplied by XPipe when establishing a shell connection. You can also access the file systems via these shell connections to read and write remote files.

A note on the open-source model

Since it has come up a few times, and in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place, as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

The system is designed to allow unlimited usage in non-commercial environments and only requires a license for more enterprise-level environments. This system is never going to be perfect, as there is no clear separation between the kinds of systems used in, for example, homelabs and enterprises. But I try my best to give users as many free features as possible for their personal environments.

Outlook

If this project sounds interesting to you, you can check it out on GitHub! There are more features to come in the near future.

Enjoy!

1431
1432
 
 

I work with a client that migrated their infrastructure to Microsoft. In order to connect to their Linux server, I now have to Remote Desktop into their Azure Virtual Desktop thing. I'm not pleased, but it's out of my control.

I tried Remmina with FreeRDP, but it doesn't seem to support that Azure thing; there doesn't seem to be an option to add the workspace.

Any recommendations, or do I have to set up a virtual machine just for this? :/ Cheers

1433
1434
1435
1436
25
submitted 6 months ago* (last edited 6 months ago) by bonus_crab@lemmy.world to c/linux@lemmy.ml
 
 

First of all, thanks to everyone who came out and offered their suggestions and advice yesterday when I asked about setting up my UM890 from Minisforum in RAID0.

Many called me mad for going for RAID0 here, but... shrug, it's not my only computer, so I'm OK being a bit risky here.
My original plan was to back up, set up the drives in RAID, install Nobara on the RAID array, and ride off into the sunset.

That was a bad plan.

Timeshift froze while backing up and, worse, Back In Time froze while restoring.
Repeatedly.
Even when booting from a live USB and without enabling RAID.
I wasted several hours trying variations of that... my USB drive is kinda slow.

On my last post, someone suggested I simply add the new drive to the existing btrfs file system, then switch to a RAID0 profile.
That was a good plan, and ultimately what I ended up doing after my original plan failed.

Resources :
https://wiki.tnonline.net/w/Btrfs/Adding_and_removing_devices
https://www.ubuntumint.com/add-new-device-to-btrfs-file-system/
https://serverfault.com/questions/213861/multi-device-btrfs-filesystem-with-disk-of-different-size

Commands I ran on my machine:
sudo fdisk -l                                       # list disks to identify the new drive
sudo lsblk                                          # double-check device names
sudo mkdir /mnt/drive1
sudo btrfs device add /dev/nvme1n1 /mnt/drive1 -f   # add the new NVMe to the btrfs filesystem (-f overwrites any old signature)
btrfs filesystem df /                               # check the current data/metadata profiles
sudo btrfs balance start -dconvert=raid0 /          # rebalance, converting the data profile to RAID0

Performance
In case the image doesn't show: 6230 MB/s sequential read, 758 MB/s 4K random read, 4496 MB/s sequential write, 292.6 MB/s 4K random write.

PS
Even though my restore failed, the files were all there, so I didn't lose anything; I just had to reinstall all my programs and such.
Also, enable NVMe RAID in the BIOS before you do anything else.

1437
1438
 
 

The following command works even though I really don't think I should have permission to the key file:
$ openssl aes-256-cbc -d -pbkdf2 -in etc_backup.tar.xz.enc -out etc_backup.tar.xz -k /etc/ssl/private/etcBackup.key

I'm unable to even ascertain the existence of the key file under my normal user. I'm a member of only two groups, my own group and vboxusers.

The permissions leading up to that file:

drwxr-xr-x   1 root root 4010 Jul 31 08:01 etc
...
drwxr-xr-x 1 root root      206 Jul 14 23:52 ssl
...
drwx------ 1 root root    26 Jul 31 14:07 private
...
-rw------- 1 root root 256 Jul 31 14:07 etcBackup.key

OpenSSL isn't setuid:

> ls -la $(which openssl)
-rwxr-xr-x 1 root root 1004768 Jul 14 23:52 /usr/bin/openssl

There don't appear to be any ACLs related to that key file:

> sudo getfacl /etc/ssl/private/etcBackup.key
[sudo] password for root: 
getfacl: Removing leading '/' from absolute path names
# file: etc/ssl/private/etcBackup.key
# owner: root
# group: root
user::rw-
group::---
other::---

> sudo lsattr  /etc/ssl/private/etcBackup.key
---------------------- /etc/ssl/private/etcBackup.key

Finally, it's not just that the original file was encrypted with an empty key, since a different key path fails to decrypt:

> openssl aes-256-cbc -d -pbkdf2 -in etc_backup.tar.xz.enc -out etc_backup.tar.xz -k /etc/ssl/private/abc.key
bad decrypt
4047F634B67F0000:error:1C800064:Provider routines:ossl_cipher_unpadblock:bad decrypt:providers/implementations/ciphers/ciphercommon_block.c:124

Does anyone know what I've missed here?
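A small sandbox reproduction of the command shapes above may help anyone poking at this. Note that per openssl-enc(1), -k takes the passphrase itself as a literal string, while -kfile reads the passphrase from a file; a path passed to -k is therefore used as the password text, and the file it names is never opened:

```shell
# Encrypt and decrypt a scratch file, passing a path-like string to -k.
# The "key file" does not need to exist or be readable: -k uses the
# string itself as the passphrase.
cd "$(mktemp -d)"
echo "test data" > plain.txt
openssl aes-256-cbc -e -pbkdf2 -in plain.txt -out plain.enc \
    -k /etc/ssl/private/etcBackup.key
openssl aes-256-cbc -d -pbkdf2 -in plain.enc -out roundtrip.txt \
    -k /etc/ssl/private/etcBackup.key
cmp plain.txt roundtrip.txt && echo "round trip OK"
```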

1439
343
submitted 6 months ago* (last edited 6 months ago) by Steamymoomilk@sh.itjust.works to c/linux@lemmy.ml
 
 

As an avid user of LightBurn for my business, this truly saddens me.

I loved having the freedom to run Linux and still get first-class support.

LightBurn states in this post that Linux users are less than 1% of their user base. They also state it costs a lot of money and time to develop for each distribution. To which I gotta ask: WHY not just make a Flatpak, or distribute the source and let the community package it? It seems kinda dumb to kill it off; I've been using Zorin OS for 3 years running my laser cutter, and it works bloody great!!!! The last version for Linux will be 1.7, which will continue to work forever with a valid licence. I do not plan to switch back to ~~Windows~~ spyware or ~~Mac~~ overpriced Unix. I hope the people at LightBurn reconsider in the future; their software is the best software for laser cutters, period. And when buying my laser cutter (60 W OMTech), I went out of my way to buy one with a Ruida controller because it is compatible with LightBurn.

EDIT: just got the email; this is what they sent:

"To our valued Linux users:

After a great deal of internal discussion, we have made the difficult decision to sunset Linux support following the upcoming release of LightBurn 1.7.00.

Many of us at LightBurn are Linux users ourselves, and this decision was made reluctantly, after careful investigation of all possible avenues for continuing Linux support.

The unfortunate reality is that Linux users make up only 1% of our overall user base, but providing and supporting Linux-compatible builds takes up as much or more time as does providing them for Windows and Mac OS.

The segmentation of Linux distributions complicates these burdens further — we've had to provide three separate packages for the versions of Linux we officially support, and still encounter frequent compatibility issues on those distributions (or closely related distributions), to say nothing of the many distributions we have been asked to support.

Finally, we will soon begin building LightBurn on a new framework that will require our development team to write custom libraries for each platform we support. This will be a significant undertaking and, regrettably, it is simply not tenable to invest our team's time into an effort that will impact such a small portion of our user base. Such challenges will only continue to arise as we work to expand LightBurn's capabilities going forward.

We understand that our Linux users will be disappointed by this decision. We appreciate all of our users, and assure you that your existing license will still work with any version of LightBurn for which your license term is valid, up until LightBurn version 1.7.00, forever. Prior releases will always be made available for download. Finally, your license will continue to be valid for future Windows and Mac OS releases covered by your license term.

If you are a Linux-only user who has recently purchased a license or renewal that is valid for a release of LightBurn after v1.7.00, please contact us for a refund.

Rest assured that we will be using the time gained by sunsetting Linux support to redouble our efforts at making better software for laser cutters, and beyond. We hope you will continue to utilize LightBurn on a supported operating system going forward, and we thank you for being a part of the LightBurn community.

Sincerely,

The LightBurn Software Team

Copyright © 2024 LightBurn Software. All rights reserved. "

I appreciate that they're willing to refund recently bought licences, and that all versions up to 1.7 will keep working forever instead of some DRM bullshit (you gotta buy the newest subscription service) {insert cable guys from South Park}. But if you're rewriting the framework anyway, then why kill off Linux??? They said they're working on a native ARM build for macOS, which, knowing Apple, means you're gonna have to buy the new MacBook because the old one is old and Apple needs your money. So that's not any more of a reason to kill Linux.

TLDR: they're killing Linux support because it's less than 1% of their user base and they spend more money and time maintaining the Linux build of LightBurn.

1440
 
 

Welcome to the monthly update for openSUSE Tumbleweed for July 2024. Last month was busy with events like the Community Summit in Berlin and the openSUSE Conference. Both events were productive and well-received. Despite the busy schedule and the follow-on discussion from the conference about the rebranding of the project, a number of snapshots continued to roll out to users this month.

Stay tuned and tumble on!

Should readers desire more frequent information about snapshot updates, they are encouraged to subscribe to the openSUSE Factory mailing list.

New Features and Enhancements

  • Linux Kernel 6.9.9: This kernel introduces several important fixes and enhancements across various subsystems. Key updates include the introduction of devm_mutex_init() for mutex initialization in multiple components, addressing issues in the Hisilicon debugfs uninit process, and resolving shared IRQ handling in DRM Lima drivers. Fixes in the PowerPC architecture avoid nmi_enter/nmi_exit in real mode interrupts, while networking improvements prevent unnecessary BUG() calls in net/dql. Enhancements in WiFi drivers such as RTW89 include improved handling for 6 GHz channels. Updates in DRM/AMD drivers address multiple issues, from uninitialized variable warnings to ensuring proper timestamp initialization and memory management. The RISC-V architecture receives a fix for initial sample period values, and several BPF selftests see adjustments for better error detection. These updates collectively enhance system stability, performance, and security.
  • KDE Plasma 6.1.3: Discover now auto-handles Flatpak rebases from runtimes and properly uninstalls EOL refs without replacements. In Kglobalacceld, invalid keycodes are explicitly processed. Kpipewire introduces proper cleanup on deactivate and fixes thread handling for PipeWireSourceStream. KScreen now uses ContextualHelpButton from Kirigami, and Kscreenlocker adds a property to track past prompts. KWin sees numerous improvements: relaxed nightlight constraints, simplified Wayland popup handling, better input method windows, and enhanced screencast plugins. Plasma Mobile enhancements improve home screen interactions, translation issues, and swipe detection. Plasma Networkmanager and Plasma Workspace benefit from shared QQmlEngine and various bug fixes, including avatar image decoding and pointer warping on Wayland.
  • Frameworks 6.4.0: Attica updates its gitignore to include VS Code directories. Baloo reverts a QCoreApplication change and ports QML modules. Breeze Icons introduces a ColorScheme-Accent and fixes data-warning icons. KArchive now rejects tar files with negative sizes and fixes crashes with malformed files. KAuth and KBookmarks add VS Code directories to gitignore. KCalendarCore adds missing QtCore dependencies and QML bindings for calendar models. KIO improves systemd process handling and deprecates unused features. Kirigami enhances navigation and dialog components. KTextEditor adds a tool for testing JavaScript scripts and ensures even indent sizes, fixing multiple bugs.
  • KDE Gear 24.05.2: Akonadi-calendar adds missing change notifications. Dolphin updates Meta-Object Compiler generation. Filelight enables appx building and ensures hicolor icon presence, while Itinerary fixes calendar permissions and corrupted notes and introduces new extractors. Kdenlive addresses timeline, aspect ratio, and compilation issues. Okular fixes a crash with certain PDF actions.
  • Supermin 5.3.4: This update introduces several key enhancements, including support for OCaml 5 and kylinsecos. It improves package management by detecting dnf5 and omitting missing options. The update also refines OCaml compilation by using -output-complete-exe instead of -custom, fixes kernel filtering for the aarch64 architecture, and enables kernel uncompression on RISC-V. The update removes previously applied patches now included in the new tarball, helping to streamline the codebase and improve maintainability.
  • Checkpolicy 3.7: The latest update brings support for Classless Inter-Domain Routing notation in nodecon statements, enhancing SELinux policy definition capabilities. Error messages are now more descriptive, and error handling has been improved. Key bug fixes include handling unprintable tokens, avoiding garbage value assignments, freeing temporary bounds types and performing contiguous checks in host byte order.

Key Package Updates

  • NetworkManager 1.48.4: This update introduces support for matching Open vSwitch (OVS) system interfaces by MAC address, enhancing network interface management. Additionally, NetworkManager now considers the contents of /etc/hosts when determining the system hostname from reverse DNS lookups of configured interface addresses, improving hostname resolution accuracy. Subpackages updated include NetworkManager-bluetooth, NetworkManager-lang, NetworkManager-tui, NetworkManager-wwan, libnm0, and typelib-1_0-NM-1_0. These enhancements contribute to more robust and precise network configuration handling in Linux environments.
  • libguestfs 1.53.5: This update includes significant enhancements and fixes. The --chown parameter is now correctly split on the ':' character, and a new checksum command is supported. Detection for Circle Linux and support for the LoongArch architecture have been added, including file architecture translation fixes. The update allows nbd+unix:// URIs and reimplements GPT partition functions using sfdisk. DHCP configuration improvements and a new virt-customize --inject-blnsvr operation enhance usability. Deprecated features include the removal of gluster, sheepdog, and tftp drive support. New APIs such as findfs_partuuid and findfs_partlabel improve functionality, while inspection tools now resolve PARTUUID and PARTLABEL in /etc/fstab. These updates enhance compatibility, performance, and functionality across various environments.
  • glib2 2.80.4: The latest update backports key patches: mapping EADDRNOTAVAIL to G_IO_ERROR_CONNECTION_REFUSED, handling files larger than 4GB in g_file_load_contents(), and correcting GIR install locations and build race conditions. Additionally, improvements in gthreadedresolver ensure returned records are properly reference-counted in lookup_records().
  • ruby3.3 3.3.4: This release addresses a regression where dependencies were missing in the gemspec for some bundled gems such as net-pop, net-ftp, net-imap, and prime. Other fixes include preventing Warning.warn calls for disabled warnings, correcting memory allocation sizes in String.new(:capacity) and resolving string corruption issues.
  • libgcrypt 1.11.0: The latest update introduces several new interfaces and performance enhancements. New features include an API for Key Encapsulation Mechanism (KEM), support for algorithms like Streamlined NTRU Prime sntrup761, Kyber, and Classic McEliece, and various Key Derivation Functions (KDFs) including HKDF and X963KDF. Performance improvements feature optimized implementations for SM3, SM4, and other cryptographic operations on ARMv8/AArch64, PowerPC, and AVX2/AVX512 architectures. Other changes include various enhancements to constant-time operations and deprecation of the GCRYCTL_ENABLE_M_GUARD control code.

Bug Fixes

  • orc 0.4.39:

    • CVE-2024-40897: versions before 0.4.39 had a buffer overflow vulnerability in orcparse.c; this release fixes it.
  • java-21-openjdk 21.0.4.0:

  • ovmf 202402 had three months of CVE patches in its quarterly update.

  • Mozilla Firefox 128.0: This release fixes 16 CVEs. The most severe was CVE-2024-6604; this was a memory safety bug in Firefox 128, Firefox ESR 115.13, Thunderbird 128 and Thunderbird 115.13. These bugs showed evidence of memory corruption that potentially allowed arbitrary code execution.

  • ghostscript 10.03.1:

    • CVE-2024-33869 allowed bypassing restrictions via crafted PostScript documents.
    • CVE-2023-52722
    • CVE-2024-33870 allowed access to arbitrary files via crafted PostScript documents.
    • CVE-2024-33871 allowed arbitrary code execution via crafted PostScript documents using custom Driver libraries in contrib/opvp/gdevopvp.c.
    • CVE-2024-29510 allowed memory corruption and SAFER sandbox bypass via format string injection in a uniprint device.
  • xwayland 24.1.1:

    • CVE-2024-31080 could allow attackers to trigger the X server to read and transmit heap memory values, leading to a crash.
    • CVE-2024-31081 could cause memory leakage and segmentation faults, leading to a crash.
    • CVE-2024-31083 allowed arbitrary code execution by authenticated attackers through specially crafted requests.
  • libreoffice 24.2.5.2:

    • CVE-2024-5261 allows fetching remote resources without proper security checks.
  • GTK3 3.24.43:

    • CVE-2024-6655 allowed a library injection into a GTK application from the current working directory under certain conditions.
  • netpbm 11.7.0:

    • CVE-2024-38526: pdoc, which provides API documentation for Python projects, had a vulnerability where pdoc --math linked to malicious JavaScript files from polyfill.io.

Conclusion

The month of July 2024 was marked by significant updates, security fixes and enhancements. The Linux Kernel 6.9.9 update introduced several key fixes and improvements across various subsystems, enhancing overall stability and performance. KDE Plasma 6.1.3 brought numerous UI improvements and better handling of Flatpak rebases. The updates to Frameworks 6.4.0 and KDE Gear 24.05.2 provided additional enhancements and bug fixes, improving user experience and system reliability. Critical security vulnerabilities were addressed in various packages, including Firefox, ghostscript, and xwayland, ensuring Tumbleweed remains secure, efficient, and feature-rich for all users. Additionally, the Aeon team announced that Aeon Desktop has reached Release Candidate 3 status, built from a Tumbleweed snapshot released last week.

For those Tumbleweed users who want to contribute or engage in detailed technological discussions, subscribe to the openSUSE Factory mailing list. The openSUSE team encourages users to continue participating through bug reports, feature suggestions and discussions.

Contributing to openSUSE Tumbleweed

Your contributions and feedback make openSUSE Tumbleweed better with every update. Whether reporting bugs, suggesting features, or participating in community discussions, your involvement is highly valued.

More Information about openSUSE:

Official

Fediverse

(Image made with DALL-E)

1441
 
 

Had to mount a USB drive and run it from there, because I don't have sudo.

1442
53
submitted 6 months ago* (last edited 6 months ago) by MyNameIsRichard@lemmy.ml to c/linux@lemmy.ml
 
 

It's not my work in KDE, it's a blog I posted

1443
 
 

I am going to ask if I may use Linux for work. We are using Windows, but there is nothing we do that couldn't be done on Linux. Privately, I am mainly a Fedora user, but I'd be happy with any OS and DE or WM. What do I need to look out for when I suggest an OS? What does a computer / Linux / DE need in order to be ready for an enterprise workstation? Will I only have a user account and no sudo rights? May I install all Flatpak apps? Does the admin have to be able to SSH in remotely?

1444
51
submitted 6 months ago* (last edited 6 months ago) by B0g3nNutz3r@lemmy.ml to c/linux@lemmy.ml
 
 

Hi, I have never built a PC before, which is why I am asking for your help and suggestions. I have informed (or misinformed) myself about a few aspects of building a PC. I will give my reasoning for why I chose each part, and let you decide why I am wrong.

Usage:

The goal of this build is to create a gaming PC which can play most games, at least at lower resolutions and at a sufficient frame rate. I plan to build this PC with future software requirements in mind, to reduce e-waste and to leave room for possible upgrades. This PC should support Coreboot to allow for firmware updates even after official firmware support has stopped. This machine will run Linux as the main OS and probably Dasharo as the Coreboot distribution. The main use is playing games and emulation, but I also intend to use it for virtualisation.

Components:

  • Motherboard: ~~Pro Z790-P Wifi (DDR5 Variant)~~
  • CPU: ~~Intel Core i5-13600KF (Alder/Raptor_Lake-S)~~
  • CPU-Cooler: ~~Scythe Fuma 3 67.62 CFM CPU Cooler (4-30 dB)~~
  • GPU: XFX Speedster QICK 309 Radeon RX 7600 XT 16 GB Video Card
  • RAM: G.Skill Ripjaws S5 32 GB (2 x 16 GB) DDR5 5600 (CL 28)
  • Storage: Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive
  • PSU: SeaSonic FOCUS Plus 650 Gold 650 W 80+ Gold Certified Fully Modular ATX Power Supply

Why did I choose those parts?

Motherboard:

  • The two main reasons why I chose the Z790-P are that the motherboard needs to support Coreboot and that it is DDR5 compatible.
  • I could not care less whether the motherboard supports WiFi or Bluetooth, since the PC is not going to leave my desk, but I will not complain about having them.

CPU:

  • ~~Since I already decided on a motherboard, the manufacturer decided the CPU-brand for me. In this case Intel. The CPU-socket only allows for microarchitectures Alder and Raptor Lake-S, so my choice is limited.~~ Intels 13th and 14th generation CPUs have many reported issues. Intel reported that many of those issues are due to faulty voltage configuration in the motherboard bios, which cause the CPU to degrade at an accelerated pace. They are due to release a microcode patch mid-august, which reportedly fixes the issue without a significant performance loss. Obviously, this patch will not fix already damaged chips. Another problem is, that they had some issues while manufacturing these chips in 2023, which caused oxidation and therefore degradation.
  • Generally you want more of everything: cores, threads and clock speed. The exception is power usage, where less is better.
  • You also have the choice between a CPU with or without integrated graphics. ~~To save the environment and my bank account I will choose one without it. For Intel that is every CPU with the "F" designation.~~ As thingsiplay and felsiq have pointed out, several issues can arise when building a system without an integrated GPU. One is that it becomes harder to debug problems, since you cannot simply unplug your dedicated GPU to test whether the GPU drivers are at fault.

CPU-Cooler:

  • Check that the cooler's rated TDP matches your CPU, e.g. >=125 W for a 125 W CPU.
  • Check that it fits your motherboard and case.
  • Lastly, make sure it is not too loud.

GPU:

  • Main OS is Linux, so I will spare myself the pain and choose AMD over Nvidia.
  • More demanding games use more video memory. I have read that 8 GB often is not enough anymore.

RAM:

  • Virtualisation often needs a lot of resources, especially RAM.
  • For optimal performance your RAM speed should match your CPU. Any more and you waste money, any less and you create a bottleneck. Since the i5 only supports 5600 MT/s, anything faster is wasted.

Storage:

  • Since most games today use around 60-150 GB, this PC will need a lot of storage. About thirteen 150 GB games fit on a 2 TB drive. I hope this will suffice.

Power supply unit:

  • The deciding factors are form factor and power. You cannot use a PSU if it either does not fit in your case or does not have enough juice to power your other components.
  • Your PSU should have 20-30 % headroom in case of a spike. I think a 650 W PSU should be enough for an estimated load of 490 W. Please correct me if I am wrong.
  • Some people recommend buying a PSU with more power than needed to allow for upgrades with higher power usage, but apparently the PSU will not run efficiently in that case. I have read that a PSU should be most efficient at your typical hardware usage to maximize power savings, e.g. do not buy a 1000 W PSU when you only draw around 400 W at idle.
  • Also important are the +12 V rails: you should make sure they supply at least 24 A. Lastly, check which power connectors the PSU provides.
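The headroom rule above can be sketched as a quick back-of-the-envelope calculation. The wattage figures below are made-up placeholders, not measured values for these exact parts:

```python
# Rough PSU sizing: estimated load plus 20-30 % headroom for spikes.
def required_psu_watts(load_watts, headroom=0.3):
    """Minimum PSU rating for a given estimated load, with spike headroom."""
    return load_watts * (1 + headroom)

# Hypothetical component draws (replace with your parts' actual numbers):
cpu, gpu, rest = 181, 190, 120   # W; "rest" covers board, RAM, drives, fans
load = cpu + gpu + rest          # 491 W estimated load
print(required_psu_watts(load))  # ~638 W -> a 650 W unit just clears the bar
```

With 30 % headroom a 650 W unit barely covers a ~490 W load, which is why some people would size up here.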

Since I plan something special for the PC case, it will not be part of this post. I hope this post can be used by others in the future as a reference for building a Linux PC.

PS: This is my first post on lemmy. I am sorry for any formatting errors. I hope the post is legible.

Edit:

  • added links for explanation
  • fixed some grammatical errors
  • added suggestions from the comments

It is getting late here. I will look into a substitute for intel tomorrow (8 hours from the latest edit) and add this here.

1445
 
 

I'm new to Linux; I fled from Windows in the wake of 10/11's ever-accelerating stream of bullshit.

Anyway, I have major muscle memory for MRU window and tab switching with alt-tab and ctrl-tab. Edit for clarity: I also want to be able to navigate to the Nth most recent tab by holding Ctrl and pressing Tab N times, then releasing Ctrl. I use it all the time to switch windows, browser tabs, and IDE tabs. In Windows, I could also switch Terminal tabs in MRU order, and I miss this in Linux. My distro (Mint) comes with gnome-terminal, which, as far as I can tell, doesn't expose MRU switching as an option.

Is there an alternative terminal that does support this, ideally with ctrl-tab? Alternatively, if you use MRU switching in other contexts but not in your terminal, what do you use instead?

UPDATE

After installing many different terminals and poring through documentation of widely varying quality, I have found at least two terminal emulators that just do what I want, out of the box: Konsole and QTerminal. I'll dive deeper into the relative merits of these two for now. If you know of another terminal that does what I described, or any crucial info about either Konsole or QTerminal, please let me know!

1446
 
 

Hey guys, I've been running Pop!_OS for more than a year without any big issues until today. I did a reboot (with the checkmark to do updates) and after that the PC got stuck at the motherboard logo. After reading a bit I rebooted again while holding the space bar -> chose the old kernel and the PC started. One screen was just black while the other one was using a super low resolution. I went to Pop!_Shop and downgraded the video drivers from 555 to 470, edited /boot/efi/loader/loader.conf to default Pop_OS-oldkern, rebooted and voilà, it's booting fine. Then I noticed my app PrusaSlicer (flatpak) doesn't show the Platter (main tab) and it crashes if I try to slice. I read it could be graphics-driver related. Then I struggled to update the drivers to 555 again and eventually succeeded using sudo apt remove ~nnvidia and sudo apt install pop-desktop system76-driver-nvidia. I believe I'm on the old kernel with the new GPU drivers, but I still have the issue with PrusaSlicer. Anyone know a solution?

When I had only one screen working (at low resolution) PrusaSlicer was working fine... reinstalling doesn't help at all.

Thx in advance

1447
 
 

Apart from the obvious nautical-themed solutions, are there any ways around streaming services not allowing HD video playback on Linux? Prime Video is the one I've noticed it with the most; I haven't tried Disney+ yet but I'm expecting it to be similar. I've been dual-booting for a while now and this is the main thing keeping me on Windows at the moment.

1448
 
 

I have to use Windows on my work computer and I am finding it hard to get FOSS applications on Windows that can do stuff like

  1. Record a video (like SimpleVideoRecorder does)
  2. Take a screenshot (there's snip, but it isn't very customizable)
  3. Unzip .zip files

Just the routine things I used to take for granted on Linux. So I was wondering: is there a FOSS app store for Windows?

And it would be very helpful if someone could suggest alternatives for

  1. SimpleVideoRecorder
  2. Archive Manager

Even the apps I installed for these things either had ads or asked me for payment to record more than 2 minutes of video. I am pretty sure there are FOSS apps for these things out there, but I don't know where :')

PS: To everyone who has tried to help, thank you very much. I was feeling guilty for not replying to most of you, so I thought I would reply to all of ya, but funnily enough, lemmy had had enough of my gratitude!

1449
 
 

This is something that perplexed me a few years ago with FlashForth on a PIC18/PIC24/Arduino Uno. I was using the Python serial emulator S-Term because its source code is simple and it worked. I really wanted a way to load more structured Words into the FF dictionary, with bookmarks, in a way that made sense structurally. That led to a desire to execute code from the uC on the host system, but I never wrapped my head around how to do this in practice.

As a random simple example, let's say I set up an interrupt based on the internal temperature sensor of the PIC18. Once triggered, the uC must call a Python script on the host system, and this script defines a new FF word on the uC by typing the definition into the interpreter.

How do you connect these dots to make this work at the simplest 'hello world' level? I tried modifying S-Term at one point, but I didn't get anywhere useful with my efforts.
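One way to connect those dots at the "hello world" level is a small host-side responder: the terminal program watches the serial stream for a trigger line printed by the uC and answers by typing a Forth definition back into the FF interpreter. A minimal sketch assuming pyserial; the trigger string, port name, baud rate and the FF word itself are all invented for illustration:

```python
# Hypothetical host-side responder for a FlashForth target.
TRIGGER = "!TEMP_ALERT"                 # made-up marker the uC would print
NEW_WORD = ': temp-ack ." ack" cr ;'    # made-up FF word to define remotely

def respond(line):
    """Return the text to type back for one received line, or None."""
    if line.strip() == TRIGGER:
        return NEW_WORD + "\r"          # FF executes a CR-terminated line
    return None

def main(port_name="/dev/ttyUSB0", baud=38400):
    """Run the responder loop (needs pyserial and real hardware)."""
    import serial                       # pip install pyserial
    with serial.Serial(port_name, baud, timeout=1) as port:
        while True:
            line = port.readline().decode(errors="replace")
            reply = respond(line)
            if reply:
                port.write(reply.encode())
```

On the uC side, the interrupt handler would simply print the marker; the host then feeds the new definition into the interpreter exactly as if you had typed it. The same pattern could be bolted onto S-Term's read loop instead of a standalone script.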

1450
 
 

I'd appreciate a sanity check for what I'm planning to do later today.

I bought a Minisforum UM890 recently. It has 2 M.2 NVMe slots. The system currently runs Nobara off one drive; the other is empty. The drive has filesystem encryption enabled.

I backed up the root folder of my system to a 128gb usb using backintime. I enabled encryption when asked.

I plan to install a second SSD, enable RAID 0 striping across the 2 drives in the BIOS, boot from a live USB, then install Nobara onto the new RAID volume.

After that, I should be able to reinstall backintime and then restore my backup, right?
