Molecular0079

joined 1 year ago
[–] Molecular0079@lemmy.world 1 points 1 week ago (1 children)

Some people have reported that installing the 32-bit version of the Mesa libva drivers makes it work for them. Might be worth a shot.
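On Arch with multilib enabled, that would be something like the following (lib32-libva-mesa-driver is my guess at the package in question; adjust for your distro and GPU):

    sudo pacman -S lib32-libva-mesa-driver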

[–] Molecular0079@lemmy.world 1 points 1 month ago

It is not necessary to add the nvidia stuff to initramfs. The important part is nvidia_drm.modeset=1.
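For reference, one minimal way to set that, assuming GRUB (a modprobe drop-in with options nvidia_drm modeset=1 works too; the other flags here are placeholders):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia_drm.modeset=1"

    # regenerate the config so the parameter takes effect
    sudo grub-mkconfig -o /boot/grub/grub.cfg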

[–] Molecular0079@lemmy.world 3 points 1 month ago

Yeah, in a Reddit comment, Hector Martin himself said that the memory bandwidth on the Apple Silicon GPU is so high that any potential performance problems due to TBDR vs IMR are basically insignificant.

...which is funny, because I had another Reddit user swear up and down that TBDR was a big problem and that's why Apple decided not to support Vulkan and instead is forcing everyone onto Metal.

[–] Molecular0079@lemmy.world 9 points 1 month ago

Nowadays, I mostly don't even care about compatibility issues anymore and just expect a game to work in Linux, which is just freaking cool. Obviously, some competitive MP games are off the table due to anti-cheat, but that isn't my main gaming category nowadays so it works out.

[–] Molecular0079@lemmy.world 4 points 1 month ago (2 children)

I've heard something about Apple Silicon GPUs being tile-based rather than immediate mode, which means they map onto Vulkan differently than regular PC GPUs do. How has this been addressed in the Vulkan driver?

[–] Molecular0079@lemmy.world 4 points 1 month ago

Huge fucking deal, especially for Nvidia users, but it is great for the entire ecosystem. Other OSes have had explicit sync for ages, so it is great for Linux to finally catch up in this regard.

[–] Molecular0079@lemmy.world 10 points 1 month ago

You're correct. While the stable version of KDE Plasma on Wayland is usable right now with the new driver, with no flickering issues and the like, it technically does not have the patches needed for explicit sync. Nvidia has put some workarounds in the 555 driver code to prevent flickering without explicit sync, but they're slower code paths.

The AUR has a package called kwin-explicit-sync, which is just the latest stable kwin with the explicit sync patches applied. This, combined with the 555 drivers, makes explicit sync work, finally solving the flickering issues in a fast, performant way.
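If you want to try it, the usual AUR route works (or use an AUR helper like paru):

    git clone https://aur.archlinux.org/kwin-explicit-sync.git
    cd kwin-explicit-sync
    makepkg -si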

I've tested with both kwin and kwin-explicit-sync and the latter has dramatically improved input latency. I am basically daily driving Wayland now and it is awesome.

[–] Molecular0079@lemmy.world 9 points 2 months ago (1 children)

The task manager is just another widget on the panel. Right click anywhere on the panel (except on the tray icons, those are special), and click Enter edit mode. Then you can drag the task manager along the panel and configure it how you like.

[–] Molecular0079@lemmy.world 1 points 3 months ago

Yeah that explains why you're not seeing the issue. Seems like drkonqi activates on logout and holds up the entire process.

[–] Molecular0079@lemmy.world 1 points 3 months ago (1 children)

I do not have this file at all. I think yours was a different issue.

[–] Molecular0079@lemmy.world 2 points 3 months ago

It will occasionally work, so you may have just gotten lucky. It seems like it's a drkonqi issue; see the linked upstream bug report for the workarounds. You can either uninstall drkonqi or just mask the systemd service for now.
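For the masking route, something like this should do it, assuming the same drkonqi-coredump-pickup user unit that shows up in my journal:

    # keep drkonqi's coredump pickup from holding up logout
    systemctl --user mask drkonqi-coredump-pickup.service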

[–] Molecular0079@lemmy.world 1 points 3 months ago (2 children)

Do you have drkonqi installed? According to this thread, it's an issue with drkonqi.

https://bbs.archlinux.org/viewtopic.php?pid=2163139#p2163139

 

On one of my machines, I am completely unable to log out. The behavior is slightly different depending on whether I am in Wayland or X11.

Wayland

  1. Clicking log out and then OK in the log out window brings me back to the desktop.
  2. Doing this again does the same thing.
  3. Clicking log out for a third time does nothing.

X11

  1. Clicking log out will lead me to a black screen with just my mouse cursor.

In my journalctl logs, I see:

Apr 03 21:52:46 arch-nas systemd[1]: Stopping User Runtime Directory /run/user/972...
Apr 03 21:52:46 arch-nas systemd[1]: run-user-972.mount: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: user-runtime-dir@972.service: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: Stopped User Runtime Directory /run/user/972.
Apr 03 21:52:46 arch-nas systemd[1]: Removed slice User Slice of UID 972.
Apr 03 21:52:46 arch-nas systemd[1]: user-972.slice: Consumed 1.564s CPU time.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.kded.smart@0.service: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.discretegpuhelper@0.service: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.backlighthelper@0.service: Deactivated successfully.
Apr 03 21:52:48 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.chargethresholdhelper@0.service: Deactivated successfully.
Apr 03 21:52:54 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.LogoutPrompt.
Apr 03 21:52:54 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.LogoutPrompt@0.service.
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: kf.windowsystem: static bool KX11Extras::compositingActive() may only be used on X11
Apr 03 21:52:54 arch-nas plasmashell[5079]: qt.qpa.wayland: eglSwapBuffers failed with 0x300d, surface: 0x0
Apr 03 21:52:55 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.Shutdown.
Apr 03 21:52:55 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.Shutdown@0.service.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target plasma-workspace-wayland.target.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace.
Apr 03 21:52:55 arch-nas systemd[4500]: Requested transaction contradicts existing jobs: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: graphical-session.target: Failed to enqueue stop job, ignoring: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace Core.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Startup of XDG autostart applications.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Session services which should run early before the graphical session is brought up.
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:1.2-org.kde.LogoutPrompt@0.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:1.2-org.kde.LogoutPrompt@0.service: Failed with result 'exit-code'.

I've filed an upstream bug for this but I was wondering if anyone else here was also experiencing the same issue.

 

Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection or an update broke the boot process, leaving the machine stuck in the BIOS or bootloader.

I am looking for a separate device that not only lets me access the NAS as if I had another keyboard, mouse, and monitor attached, but also lets me power-cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

 

cross-posted from: https://lemmy.world/post/4930979

Bcachefs is making progress towards getting included in the kernel. My dream of having a Linux-native, RAID5-capable filesystem is getting closer to reality.

 

Patch 2 seems to have drastically slowed down the Vulkan renderer. Before, I was getting 80-110 FPS in the Druid Grove, but now I am only getting 50 FPS. DX11 seems fine, but I prefer using Vulkan since I am on Linux.

Arch Linux, Kernel 6.4.12

Ryzen 3900x

Nvidia 3090 w/ 535.104.05 drivers

Latest Proton Experimental

 

I am using one of the official Nextcloud docker-compose files to set up an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network. Whenever I go to the Overview tab in the Admin settings, I see this security warning:

    The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.

I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming that Nextcloud is complaining because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only talking via SWAG. Maybe I am wrong though. I wanted to double-check and see if there was anything else I needed to do to secure my instance.

SOLVED: Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had set it to SWAG's recommended value of noindex, nofollow, nosnippet, noarchive, but Nextcloud expects exactly noindex, nofollow.
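For anyone hitting the same warning, the working directive in SWAG's nginx config looks roughly like this (where exactly it lives depends on your SWAG setup):

    # the value Nextcloud's check expects
    add_header X-Robots-Tag "noindex, nofollow" always;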

 

cross-posted from: https://lemmy.world/post/3989163

I've been messing around with podman in Arch and porting my self-hosted services over to it. However, it's been finicky and I am wondering if anybody here could help me out with a few things.

  1. Some of my containers aren't getting properly started up by podman-restart.service on system reboot. I realized they were the ones that depend on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang the boot if I disconnect it, but it seems like Podman doesn't like this. If I remove the systemd options, the containers properly boot up automatically, but I risk boot hangs if the drive ever gets disconnected from my system. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device-timeout, to no avail.

When it attempts to start the container, I see this in journalctl:

Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1
  2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-.scope to time out. Any idea what's causing this? This delay gets pretty annoying, especially on an Arch system, since I am constantly restarting due to updates.

All the containers are started using docker-compose with podman-docker if that's relevant.

Any help appreciated!

EDIT: So it seems like podman really doesn't like systemd automount. Switching to nofail, x-systemd.before=podman-restart.service seems like a decent workaround if anyone's interested.
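For reference, the /etc/fstab line ends up looking roughly like this (UUID and mount point are placeholders for my external drive):

    # external BTRFS drive, ordered before podman-restart.service
    UUID=xxxx-xxxx  /external  btrfs  nofail,x-systemd.before=podman-restart.service  0  0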

 

cross-posted from: https://lemmy.world/post/3754933

While experimenting with ProtonVPN's Wireguard configs, I realized that my real IPv6 address was leaking while IPv4 was correctly going through the tunnel. How do I prevent this from happening?

I've already tried adding ::/0 to the AllowedIPs option and IPv6 is listed as disabled in the NetworkManager profile.
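For context, the [Peer] section I'm testing looks roughly like this (key and endpoint redacted), with ::/0 added so IPv6 should route through the tunnel:

    [Peer]
    PublicKey = <redacted>
    Endpoint = <redacted>:51820
    AllowedIPs = 0.0.0.0/0, ::/0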

 

So I have been running into a weird issue lately where, if I disconnect a Bluetooth audio device, it remains visible in the KDE audio mixer. Reconnecting the device then adds a duplicate entry, and the keyboard volume control for it is completely broken: it stays at the same volume. This was working just fine about a week ago, and I've already downgraded pipewire, kpipewire, bluedevil, and plasma-pa to no avail. Nothing shows up in the logs, so I don't know exactly what's causing this bug.

Anyone else experiencing the same thing?

Arch Linux, Kernel 6.4.8-arch1-1

Pipewire 0.3.77-1

KDE Plasma 5.27.7

KDE Frameworks 5.108.0

Qt 5.15.10

EDIT: Seems like simply changing the Bluetooth A2DP audio profile causes this issue as well. I have Bluetooth earbuds that have both AAC and SBC modes and toggling between them just creates more and more duplicate devices with the same name.

THE FIX: Seems like pipewire-pulse 0.3.77 was the culprit after all. Downgrade it to pipewire-pulse 0.3.76 and then do a systemctl --user restart pipewire-pulse to work around this issue.
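On Arch, the downgrade can come straight from pacman's local cache, something like this (the exact filename in /var/cache/pacman/pkg may differ on your system):

    sudo pacman -U /var/cache/pacman/pkg/pipewire-pulse-1:0.3.76-1-x86_64.pkg.tar.zst
    systemctl --user restart pipewire-pulse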

EDIT 2: pipewire-pulse 0.3.77-2 has the patch backported. Feel free to update to latest version in Arch repos.

Relevant bug report: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/3414

 

I am subscribed to this community from Lemmy and the icon is broken. I don't even see it on ~~kbin.social~~ fedia.io. Let's get our favorite mascot back!

Here's what it looks like from lemmy.world:

EDIT: If the icon is working for you, either your browser or your instance may be caching the icon and thereby hiding the issue.
