pe1uca

joined 1 year ago
[–] pe1uca@lemmy.pe1uca.dev 3 points 1 hour ago

As long as you mean a landslide win by a party led by a guy who said during the pandemic that a religious charm was better than any medication, vaccine, or other countermeasure; a guy who answered "women deserve to go to heaven" when asked if he's a feminist; a guy who has said all power should be concentrated in the government, not in independent entities; a guy who said wind turbines make the landscape ugly, and who made two big investments in refineries during his administration... Yeah, it's a good thing to see the left wing in power.

[–] pe1uca@lemmy.pe1uca.dev 11 points 2 hours ago (3 children)

She IS AMLO's administration: during her campaign she never said a word on any topic until he had said something about it first.

Since the beginning of his term AMLO had said he would disappear from public life and retire to his home state after today, but earlier this year he said he would come back if circumstances demanded it, and just last month I think he said he will stay around.

I don't wish her luck, I wish México luck.

[–] pe1uca@lemmy.pe1uca.dev 7 points 17 hours ago

I could even go further and say: always test every change you make, don't assume the change took effect just because you updated a file.

[–] pe1uca@lemmy.pe1uca.dev 4 points 2 weeks ago* (last edited 2 weeks ago)

I use rclone and duplicati depending on the needs of the backup.

For long term I use Duplicati: it has a GUI and you can upload to several places (mine are spread between e2 and Drive).
You configure the backend, the encryption password, the schedule, and version retention.

With rclone and its crypt remote you mount your backup target as if it were an external drive, so you have to handle the actual copying of the data yourself, plus versioning and retention.
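A minimal sketch of that flow (remote names and paths are just examples, not my real config):

rclone config                              # create a backend remote (e.g. "e2") and a crypt remote (e.g. "secret") on top of it
rclone mount secret: ~/backups             # mount the encrypted remote like an external drive
rclone copy ~/documents secret:documents   # or copy into it directly; versioning and retention are up to you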

[–] pe1uca@lemmy.pe1uca.dev 3 points 2 weeks ago

Those are silent, they're there for historical reasons.

(laughs maniacally in French)

[–] pe1uca@lemmy.pe1uca.dev 64 points 2 weeks ago (6 children)

Well, the issue is that developers of other apps would force us to re-Google, since any build of the app would be useless unless it's installed from the Play Store...

[–] pe1uca@lemmy.pe1uca.dev 14 points 3 weeks ago

a console has better optimisation for a lower price.

Something else to keep in mind: sometimes they're like a printer, where the device is relatively cheap but you have to buy other stuff to actually get it working.

On PC you can find several places to buy and download games (even when it feels like only one or two exist); on console you only have the manufacturer.
On PC, as long as you have internet you can play multiplayer; on console you have to subscribe to their online service.

[–] pe1uca@lemmy.pe1uca.dev 2 points 3 weeks ago (1 children)

I can't give you the technical explanation, but it works.
My Caddyfile only has something like this:

# match requests for the forgejo hostname
@forgejo host forgejo.pe1uca
handle @forgejo {
	# proxy them to Forgejo's HTTP port on this machine
	reverse_proxy :8000
}

and everything else has worked properly, cloning via SSH with git@forgejo.pe1uca:pe1uca/my_repo.git

My guess is git only needs the host name to resolve the IP and then connects to the SSH port directly, without going through Caddy at all.
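In other words, the reverse_proxy above only handles HTTP(S). Something like this should be equivalent to the clone, with the SSH port made explicit (port 22 here is an assumption, it'd be whatever Forgejo's SSH server or sshd listens on):

# same clone with the SSH port spelled out (22 is an assumption)
GIT_SSH_COMMAND="ssh -p 22" git clone git@forgejo.pe1uca:pe1uca/my_repo.git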

[–] pe1uca@lemmy.pe1uca.dev 5 points 3 weeks ago

One of my best friends introduced me to this series back in MH4U for the 3DS.
As someone mentioned in another comment, these games are definitely not newbie friendly haha. I started it and dropped it after a few missions; I don't remember what rank I was, but I was definitely still in the starting village. Afterwards we finally got time to play together and he mocked me since my character had less armor than his palico :D
We played more often and he helped me reach higher ranks until G-rank.

Each game has had a different kind of end game.
For MH4U it was the guild quests, which were randomly generated. I loved this: it made the game not feel like a total grind, although it only felt that way, because it really was a grind to both get the correct quest and level it up to get the relics you wanted.

The one I enjoyed the least was MHGen/MHGU because there's no end game loop; once you reach G-rank the game doesn't have anything else to offer, so you just grind the same missions you already have. Of course this can be considered an end game loop in itself, since maxing your armor and weapons takes a long time (and IIRC some older fans mentioned this fit the theme of remembering the old games, since they were like that).

For MHW it was the investigations, which felt a bit like MH4U's guild quests but without the random map.
The only downside of this game and the Iceborne expansion was the game-as-a-service aspect: you could only access some quests on certain days of the week, you had to connect to the internet to get them, and one of the last bosses is tied to multiplayer, which is impossible to finish properly if you have bad internet or only have time for a single quest.

I've bought each game, with around 200 hours minimum in each one. IIRC 450+ in MH4U and around 500 in MHW (mostly because it's harder to pause on PS4). MHRise/Sunbreak

MHRise with the Sunbreak expansion is one of the most relaxing ones, since you can take NPCs on all missions; they help a lot to de-aggro the monsters and let you enjoy the hunt.

I was with some friends from work when the trailer for MHW released and we literally screamed when we realized it was an MH game haha.

The only change they've made between games that I found really annoying was to the hunting horn. It was really fun having to adapt your hunt to each horn's songs and keeping track of which buffs were active and which ones you needed to re-apply (in reality you always rotated your songs over and over so you never ran out of buffs).
But in Rise each song is now just X -> X, A -> A, and X+A -> X+A; there are no combinations.
Every hunting horn only has 3 songs, whereas previously some horns could have up to 5.
Playing a song twice used to level up the buff it applied; well, in Rise they made it a single attack that plays all your songs twice.
It feels like they tried to simplify the weapon but two teams were put in charge of providing ideas and both solutions got implemented, which left the weapon with no depth at all.
Also, previously you felt like the super support when playing hunting horn: each time you applied a buff a message appeared showing which buff it was. Yeah, it was kind of spammy, but it felt nice having a hunting horn on the hunt.
In Rise they decided to only display a message the first time you apply a buff and that's it, so when you re-apply it there's nothing, even though you keep buffing your team. Ah, but if you use the bow, its arc shot does spam the buff message, so you end up feeling like less of a support than the bow :/

Due to work I haven't followed all the news of MHWilds, but I'll definitely buy it.


For the next posts my recommendations would be the Sniper Elite, Mario & Luigi, Pokémon Mystery Dungeon, and Disgaea series.
(Maybe another theme for posts could be a genre/mechanic, like tactics games or colony management in general.)

[–] pe1uca@lemmy.pe1uca.dev 9 points 3 weeks ago (1 children)

Ohhh! Now I understand!

Yeah, then that's an issue on Mastodon's side.
As I mentioned some time ago, the fact that Mastodon and Lemmy use the same protocol even though the experiences are so different is annoying, and it causes a lot of issues like this :/

[–] pe1uca@lemmy.pe1uca.dev 11 points 3 weeks ago (13 children)

Unless the Lemmy devs have changed something since last year, this shouldn't be the case; there's a bug somewhere.

All interactions are received by the instance hosting the community, and that instance is responsible for broadcasting the interaction to every instance where a subscribed user is hosted.
So Mastodon is only responsible for sending the upvote to feddit.dk, and then feddit.dk sends it to all the other instances.

[–] pe1uca@lemmy.pe1uca.dev 4 points 3 weeks ago (2 children)

I'm not saying to delete them, I'm saying for the file system to save space through something similar to deduplication.
If I understand correctly, dedup works by having files share the same underlying data blocks, so there's no actual data loss.
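For example, on btrfs (an assumption about the file system, and it only merges blocks that are byte-identical, so near-duplicate shots may not shrink at all) an offline dedup pass looks roughly like this:

# scan the library and let byte-identical extents share blocks (btrfs/XFS with reflink support)
sudo duperemove -dhr /path/to/photos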

 

So, I'm selfhosting Immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best one, and well, we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file system level, for these near-identical images to be stored or compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

 

I was using SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS();
But this has been deprecated https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_found-rows

The recommended way now is to first run the query with a LIMIT and then run it again without it, selecting COUNT(*).
My query is a bit complex and joins a couple of tables with a large number of records, which makes each SELECT take up to 4 seconds, so my process now takes double the time compared to just keeping FOUND_ROWS.

How can I go back to running the SELECT a single time while still getting the total number of rows found without the LIMIT?
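Ideally I'd want something that returns the page and the total in one pass, along the lines of this (made-up table and column names), but I don't know how it would behave performance-wise with my joins:

SELECT t.*, COUNT(*) OVER () AS total_rows
FROM my_table t
JOIN other_table o ON o.t_id = t.id
WHERE t.created_at > '2024-01-01'
ORDER BY t.id
LIMIT 50;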

 

cross-posted from: https://lemmy.pe1uca.dev/post/1512941

I'm trying to configure some NFC tags to automatically open an app, which is easy, I just have to type the package name.
But I'm wondering how I can launch the app into a specific activity.

Specifically, when I search for FitoTrack on my phone I get the option to launch the app directly into the workout I want to track, so I don't have to open the app, tap the FAB, tap "Record workout" and then select the workout.
So I want a tag which will automatically launch the app into a specific workout.

How can I find out what data I need to put into the tag to do this?

Looking at the code will probably give me the answer, but that won't apply to closed source apps, so is there a way to list all the ways my installed apps can be launched?
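The only idea I have so far is poking around with adb, something like this (the package and activity names are placeholders; the real ones would come from the dump output):

# list the activities and published shortcuts an app exposes
adb shell dumpsys package com.example.app
adb shell dumpsys shortcut
# then try launching a candidate activity directly
adb shell am start -n com.example.app/.SomeActivity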

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendations for a voice changer to process these audio files?
Preferably it would have a CLI so I can include it in my pipeline for processing RSS feeds, but I don't mind having to work through a UI.
Bonus points if it can process audio streams.
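To show where it would slot in, the pipeline is basically piper followed by whatever the processing step ends up being, here with sox as a stand-in (the model name and effect values are just examples, and sox is more of a pitch/EQ tool than a real voice changer):

# piper reads text on stdin and writes a wav; sox then reshapes the voice a bit
echo "some post text" | piper --model en_US-lessac-medium.onnx --output_file raw.wav
sox raw.wav processed.wav pitch -150 treble -3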

 

cross-posted from: https://lemmy.pe1uca.dev/post/1434359

I was trying to debug an issue I have connecting to a NAS, so I was checking the UFW logs and found out a lot of connections from my Chromecast HD (Android TV) are being blocked on different ports of the local IP.

Sometimes I use Jellyfin, but that's over Tailscale, so there shouldn't be any traffic on the local IP, just on the Tailscale IP.
But there shouldn't be any traffic right now anyway, since I wasn't using it and didn't have Tailscale on.

The ports seem random; sometimes the same one is tried twice back to back, but then another random port is attempted.

After seeing this I enabled UFW on my daily machine and the same kind of logs showed up.

So, do you guys know what could be happening here?
Why is the Chromecast trying to access random ports on devices in the same network?
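In case it helps, this is the kind of capture that would show more detail than the UFW log does (the IP is a placeholder for the Chromecast's address):

# watch everything the chromecast sends to this machine
sudo tcpdump -ni any src host 192.168.1.50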

 

I've only used ufw and just now I had to run this command to fix an issue with docker.
sudo iptables -I INPUT -i docker0 -j ACCEPT
I don't know why I had to run this to make curl work.

So, what exactly did I just do?
This is behind my house router, which already rejects input from the WAN, so I'm guessing it's fine, right?

I'm asking because I was previously running this same image on a VPS which has a public IP, and it makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learnt that docker bypasses this if you publish ports like 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
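For reference, these are the kinds of checks I mean (the image name, port, and addresses are just examples):

# see the rule I inserted and where it sits in the INPUT chain
sudo iptables -L INPUT -v -n --line-numbers
# docker publishes ports through its own chains, which is why ufw never sees them
sudo iptables -L DOCKER-USER -v -n
# binding the published port to loopback keeps it off the public interface
docker run -p 127.0.0.1:8080:8080 some/image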

 

I mean, the price of the product is the same, so I'm taking a loan for the duration of the credit but paying no interest?
What's the catch?
I can keep my money earning a bit of interest instead of handing it over right away, and without any increase in the price of what I was already planning to buy. When or why wouldn't I choose 0% credit?

 

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9GB each, and then some at the same resolution that are 2.5GB.
I even have some in 1080p which are just 2GB.
I only have two movies in 4K: one is 3.4GB and the other is 36.2GB (I can't really tell the difference in detail since I don't have a 4K display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671MB each, the other set 191MB each (although here the quality difference is noticeable while playing them, as opposed to the other files, where I extracted some frames to compare).

What would you do? What's your target size for movies and series? What bitrate do you go for, in which codec?

Not sure if it's kind of blasphemy around here to talk about compromising quality for size, hehe, but I don't know where else to ask this. I was planning on using these settings in ffmpeg, what do you think?
I tried it on an anime episode at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0
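For comparison, a software HEVC encode should squeeze the size down further at similar quality, at the cost of a much slower encode (a sketch I haven't benchmarked; file names are placeholders):

# CPU-only HEVC; lower CRF means higher quality and a bigger file, audio is copied untouched
ffmpeg -i ./01.mp4 -c:v libx265 -crf 24 -preset slow -c:a copy ./01_x265.mkv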

 

cross-posted from: https://lemmy.pe1uca.dev/post/1137911

I need to help audit a project from another team.
I got pointers on what's expected to be checked, but I don't have templates for the documents expected in an audit report, which also means I'm not sure what the usual process is for conducting an internal audit.
I mean, I might as well read the whole repo, but maybe that's too much?

Any help or pointers on what I need to investigate to get started would be great!

 
