Post it on relevant Reddit threads?
In the US, with slander and libel, there are two standards.
If someone is a public figure, they need to show "actual malice" - that the statement was made knowing it was false, or with reckless disregard for whether it was true - in order to be successful. That's the scenario you're describing.
If you are not a public figure, the bar is lower: you can sue for slander or libel without meeting that standard; you just need to show the statement was false and caused harm to your reputation or similar.
So the answer turns on whether Christian Selig is a public figure or not - I do not know the answer to that question.
Super cool approach. I wouldn't have guessed it would be that effective if someone had explained it to me without the data.
I'm curious how easy it is to "defeat". If you take an AI generated text that is successfully identified with high confidence and superficially edit it to include something an LLM wouldn't usually generate (like a few spelling errors), is that enough to push the text out of high confidence?
I ask because I work in higher ed, and have been sitting on the sidelines watching the chaos. My understanding is that there's probably no way to automate LLM detection with high enough certainty for it to be used in an academic setting as cheat detection; the false positive rate is way too high.
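If it helps, the kind of test I have in mind is something like this - a rough sketch in Python (the `add_typos` helper is hypothetical, and the detector itself is left out since I don't know its API); you'd feed both versions to the detector and compare confidence scores:

```python
import random

def add_typos(text: str, rate: float = 0.02, seed: int = 0) -> str:
    """Swap a small fraction of letters at random - the kind of
    superficial spelling error an LLM rarely produces on its own."""
    rng = random.Random(seed)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

sample = "An AI-generated passage the detector flags with high confidence."
perturbed = add_typos(sample)

# Run both `sample` and `perturbed` through whatever detector you're
# testing and see how far the typos move the confidence score.
print(sample)
print(perturbed)
```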
Not sure then, sorry :/
Yeah. Did you take the docker stack down and bring it back up to see if the issue is resolved?
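To be clear, I just mean something like:

```sh
# tear the stack down and bring it back up
# (older installs use `docker-compose` with a hyphen)
docker compose down && docker compose up -d
```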
Does your docker network setup have the lemmy server on an internal network with no external access? The default docker config is set up that way and caused exactly what you're describing for me. The fastest fix is to comment out the "internal: true" line in the docker compose file, but you may want to consider the security implications of that.
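Roughly this change, as a sketch - the network name in your compose file may differ:

```yaml
networks:
  lemmyinternal:       # name may differ in your compose file
    driver: bridge
    # internal: true   # commented out so containers on this network
    #                  # can reach the outside world again
```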
I had no idea that subreddit existed… if you start an m4/3 community here I’ll join and submit some content.
Mostly I am depending on the reverse proxy, yes.
Otherwise, there's no critical data on the box that could cause a problem for me if the server was owned and everything exfiltrated. Worst case, if I had to completely wipe the box it would be annoying, but not worse than that.
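For reference, the reverse proxy side is roughly this shape - a minimal sketch, not my actual config, and the hostname, cert paths, and port are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name lemmy.example.com;  # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/lemmy.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lemmy.example.com/privkey.pem;

    location / {
        # only the proxy is exposed; the lemmy container itself
        # stays on the internal docker network
        proxy_pass http://127.0.0.1:8536;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```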
Interesting article, thank you for sharing.
I almost stopped reading at the octopus analogy, because I think it's pretty obviously flawed and I assumed the rest of the article might be too, but it wasn't.
A question I have: the subject of the article states as fact that the human mind is much more complex than an LLM and functions differently. My understanding is that we still do not have a great consensus on how our own brains operate - how we actually think. Is that out of date? I'm not suggesting we are all fundamentally "meat LLMs", to extremely simplify, but I also wasn't aware we've disproven that either.
If anyone has some good reading on the above to point to, I'd love to get links!
Raspberry Pi 4 running Home Assistant
Intel NUC running Frigate and a Minecraft server
Custom-built PC (i3-10100, 16 GB RAM, GTX 1070 for transcoding; 24 TB array with two parity disks, 2x 3 TB SSDs in the array for docker, OS, etc.) with quite a lot of storage running Unraid, which is my media server, backup server, and now my lemmy server.
Network is a MikroTik hEX S router and a Netgear gigabit switch, with 1 Gb fiber internet. 2 Ubiquiti APs for wifi in the house.
Good idea. I went and double-checked my settings, and for both my profile and the instance settings, both Undetermined and English are highlighted, so unfortunately that wasn't it. Thank you though.
If a large corp wants to do what you’re suggesting, they don’t need to launch a big announced project.
They can spin up a federated instance with just one user and no references to who owns it, then have that account subscribe to communities on other instances, and all the content they want gets federated straight to their semi-secret instance.
It would be very difficult to identify this in a large, healthy federation with tons of users and lots of small personal instances.
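For context, the "subscribe" step is just a single ActivityPub Follow activity, roughly like this (placeholder domains); once it's accepted, the remote instance pushes every new post and comment to the follower's instance:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Follow",
  "actor": "https://quiet-instance.example/u/lone_user",
  "object": "https://big-instance.example/c/some_community"
}
```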