chiisana

joined 1 year ago
[–] chiisana@lemmy.chiisana.net 11 points 1 day ago

This is Apple; they value different things than most people… sometimes that's warranted, results in a much better experience, and pushes everything forward (see MagSafe -> Qi2 for a recent example); other times they're just late adopters. The way a folding crease detracts from the visual aesthetics is apparently one of those things they care about.

[–] chiisana@lemmy.chiisana.net 2 points 4 days ago

Amazing stuff. Thank you so much!

[–] chiisana@lemmy.chiisana.net 1 points 4 days ago

The LM password hash (the predecessor to NTLM) was calculated in two independent blocks of 7 characters from that truncated 14-character password. That meant the rainbow table for it is far smaller than it would otherwise need to be, and if your password is shorter than 14 characters, part of the hash is much easier to brute force, because the missing characters are just padded with nulls.
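
For anyone curious, here's a rough sketch of that weakness in Python (illustration only; the final DES step, encrypting the constant "KGS!@#$%" with each half as the key, is only hinted at in comments rather than implemented):

# Rough sketch of the LM hash weakness described above (not a full implementation).
def lm_hash_halves(password: str) -> tuple[bytes, bytes]:
    # LM upper-cases the password, then truncates/pads it to exactly 14 bytes.
    pw = password.upper().encode("ascii", errors="replace")[:14].ljust(14, b"\x00")
    # The 14 bytes are split into two *independent* 7-byte halves.
    first_half, second_half = pw[:7], pw[7:]
    # Each half would then be used as a DES key to encrypt "KGS!@#$%".
    # A password of 7 or fewer characters leaves second_half all nulls, so its
    # hash is a well-known constant -- an attacker only ever has to attack a
    # 7-character keyspace, never the full 14 characters.
    return first_half, second_half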

[–] chiisana@lemmy.chiisana.net 1 points 1 week ago* (last edited 1 week ago)

If memory serves, 175B parameters is for the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed the parameter count for GPT-4, 4o, or o1 yet. If memory also serves, 3 was primarily English and had only a relatively small vocabulary (I think 50K tokens or something to that effect) it was considering as next-token candidates. Now that it works in multiple languages and is multimodal, the parameter space must be much, much larger.

The number of things it can do now is incredible, but the perceived incremental improvements in LLMs will probably slow down (since the pace is fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLMs > ???). Such an exciting time we're in!

Edit: found it. Roughly 50K tokens for the input/output embeddings in GPT-3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M
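
For scale, a quick back-of-the-envelope in Python using the figures published in the GPT-3 paper (a 50,257-token vocabulary and 12,288-dimensional embeddings for the 175B model) shows the embedding matrix alone is only a small slice of the total parameter count:

# Back-of-the-envelope using GPT-3 paper figures (vocab size and d_model).
vocab_size = 50_257
d_model = 12_288
embedding_params = vocab_size * d_model           # one embedding row per token
print(f"{embedding_params:,}")                    # ~617.6 million
print(f"{embedding_params / 175e9:.2%} of 175B")  # well under 1% of the total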

[–] chiisana@lemmy.chiisana.net -1 points 1 week ago (4 children)

The models are not wrong. The models are nothing more than statistical machinery that's really good at predicting the next word likely to follow, based on the prior information given. They don't have any understanding of the context of the words, just that statistically they're likely to follow. As such, all LLM outputs are correct with respect to their design.

The users' assumption/expectation that the output is factual is what is wrong. "Hallucination" is a fancy word in an attempt to make users feel less upset when the output passage doesn't match their assumption/expectation.
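
To make that concrete, here's a toy sketch of what "predicting the next word" amounts to; the candidate words and probabilities are invented for the example, not taken from any real model:

import random

# Toy illustration of next-token prediction: given a prefix, the model only
# produces a probability distribution over candidate next tokens and samples
# from it. The candidates and probabilities below are made up for the example.
candidates = {"Paris": 0.72, "the": 0.13, "Lyon": 0.09, "France": 0.06}
prefix = "The capital of France is"

next_token = random.choices(
    population=list(candidates.keys()),
    weights=list(candidates.values()),
)[0]
print(prefix, next_token)
# Nothing in this step checks whether the continuation is *true* --
# only that it is statistically likely given the prefix.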

[–] chiisana@lemmy.chiisana.net 5 points 1 week ago (1 children)

The network effect is too strong. The minority whining here isn't going to make a dent. Next time you're out, look at how many people are using ad-ridden apps instead of paying $0.99 or whatever to remove the ads. The users have already decided their time and privacy are worthless and they would rather get the service for "free".

[–] chiisana@lemmy.chiisana.net 5 points 1 week ago

4o does perform web searches, give summaries from a couple of pages, and include links to those pages when prompted properly.

However, as most people know, the first couple of results don't always tell the full picture and further actual research is required… but most "AI assistant" users (also including things like those voice assistants in speakers) tend to take the first response as fact…

¯\_(ツ)_/¯

[–] chiisana@lemmy.chiisana.net 10 points 1 week ago

Reducing ad spend on one platform, albeit often the elephant in the room for most companies' online marketing departments, isn't going to reduce prices at the till. Companies will either reallocate the ad spend elsewhere, thereby spamming more ads in front of everyone, or pocket the difference to pad their profit margin.

[–] chiisana@lemmy.chiisana.net 1 points 1 week ago (1 children)

Google did not make RCS; RCS was made by the GSM Association as the successor to SMS. Google extended it with some extra features such as end-to-end encryption (but only when messages are routed through their servers).

China mandated that 5G phones sold in China must support RCS, hence why Apple added support for it. Since Google is basically banned in China, you can pretty much bet that RCS going into/out of China is going to be unencrypted.

So you're basically stuck choosing between inferior unencrypted messages or routing everything through Google.

Avoid RCS like the plague.

[–] chiisana@lemmy.chiisana.net 9 points 1 week ago (1 children)

*Product Owner

Project managers move the project along; product owners are the ones making product direction decisions.

 

This morning, when I launched Voyager, my settings were reset. I suspect the app upgraded and something caused the preferences to be lost. This wasn't the first time it has happened, and who knows whether the underlying conditions triggering the reset will happen again.

It would be nice if we could export our preferences into a JSON file (or whatever format serializes easiest) and re-import them the next time preferences get lost, so we don't need to redo all the changes manually.
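
Even something as small as this would cover it (a hypothetical sketch; the preference keys are made up, not Voyager's actual settings):

import json

# Hypothetical export/import sketch -- the keys here are invented,
# not Voyager's real preference names.
prefs = {"theme": "dark", "default_sort": "hot", "collapse_bot_comments": True}

# Export: serialize preferences to a file the user can keep somewhere safe.
with open("voyager-preferences.json", "w") as f:
    json.dump(prefs, f, indent=2)

# Import: read the file back and apply it after a reset/upgrade wipes settings.
with open("voyager-preferences.json") as f:
    restored = json.load(f)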

 

Due to the decentralized nature of the platform, multiple communities on the same subject exist across multiple instances, so it is not uncommon for people to be subscribed to several communities covering the same topic. It is also not uncommon for people to submit the same thing to multiple communities on that topic, resulting in multiple posts of the same content appearing in the feed. Cross-post or not, the duplicated content clutters the feed, making it harder to consume content quickly.

I think it would be helpful to declutter by hiding/collapsing these posts. A possible implementation could be to keep an index of post title, author, and submission time, then hide/collapse (cross)posts with the same title, submitted by the same author, within some time interval (say, +/- 1 hour). That way the feed wouldn't be as cluttered.
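
A rough sketch of that index idea (the field names and the one-hour window are placeholders for illustration, not an actual API):

from datetime import timedelta

# Keep an index keyed on (title, author) and collapse any later post with the
# same key submitted within the time window.
WINDOW = timedelta(hours=1)

def collapse_duplicates(posts):
    seen = {}          # (title, author) -> submission time of first occurrence
    visible = []
    for post in sorted(posts, key=lambda p: p["submitted_at"]):
        key = (post["title"], post["author"])
        first_seen = seen.get(key)
        if first_seen is not None and post["submitted_at"] - first_seen <= WINDOW:
            continue   # same title + author within the window: hide/collapse it
        seen[key] = post["submitted_at"]
        visible.append(post)
    return visible

Keyed on (title, author), each post is a single dictionary lookup, so the check stays roughly linear in the size of the feed rather than comparing every post against every other post.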

I understand that cross-referencing each post against every other known post could be very resource consuming, so even with the time-range filter it would be prudent to make this an option, likely disabled by default, to prevent performance issues.

It may be nice to inform the user on the post itself that there are other similar discussions, in case they're interested in other comments/interactions, but that's a nice-to-have for the future.

 

I have too many machines floating around, some virtual, some physical, and they're getting added and removed semi-frequently as I play around with different tools and try out ideas. One recurring pain point is that I have no easy way to manage SSH keys across them, and it's a pain to deal with adding/removing/rotating keys. I know I can use AuthorizedKeysCommand in sshd_config to make the system fetch a remote key for validation, and I know I could theoretically publish my pub key to GitHub or the like, but I'm wondering if there's something more flexible/powerful where I can manage multiple users (essentially roles), such that each machine can be assigned a role and automatically allow access accordingly?
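
In case it helps frame the question: AuthorizedKeysCommand just runs a program and treats whatever it prints to stdout as the authorized_keys list, so a small per-machine script plus a role file could in principle do the matching. A rough, hypothetical sketch follows (the key-server URL, /etc/ssh/role file, and layout are all made up, not an existing tool):

#!/usr/bin/env python3
# Hypothetical AuthorizedKeysCommand helper -- the endpoint, /etc/ssh/role file,
# and role-to-keys layout are invented for illustration, not an existing tool.
#
# sshd_config would point at it roughly like this:
#   AuthorizedKeysCommand /usr/local/bin/fetch-keys.py %u
#   AuthorizedKeysCommandUser nobody
import sys
import urllib.request

KEY_SERVER = "https://keys.example.internal"   # assumption: your own key server

def main() -> int:
    user = sys.argv[1] if len(sys.argv) > 1 else ""
    # Each machine is assigned a role out-of-band (e.g. written at provision time).
    with open("/etc/ssh/role") as f:
        role = f.read().strip()
    # The key server returns the authorized_keys lines for this role + user;
    # sshd consumes whatever this command prints on stdout.
    url = f"{KEY_SERVER}/{role}/{user}/authorized_keys"
    with urllib.request.urlopen(url, timeout=5) as resp:
        sys.stdout.write(resp.read().decode())
    return 0

if __name__ == "__main__":
    raise SystemExit(main())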

I've seen Keyper before, but the container hasn't been updated for years, and the owner of the support Discord actively kicks everyone from the server, even after asking questions.

Is there any other solution out there that would streamline this process a bit?

 

Figured I’d share my finding here…

I got the notification for the iOS 16.5.1(c) rapid security response today. Despite having heard on a podcast (I want to say ATP, but I can't find it in the show notes so I can't link the episode) about it breaking some sites and forcing Apple to pull the update a couple of weeks back, I decided to install it anyway. After installing and restarting the phone, I found almost nothing worked. My games spin forever, no web browser loads any website, but surprisingly, iMessages were flowing through.

I poked around a bit, turning Wi-Fi off and on again, using cellular data only, toggling roaming, etc., and nothing worked. Then I noticed the little VPN icon flash by, so I went and disabled AdGuard VPN, and things seem to work again.

Originally I uninstalled the rapid security patch and things worked again, but then I realized I'd rather put up with some ads than deal with whatever security ramifications not having the patch would cause. Bear in mind: the intent of these rapid security responses is that Apple considers them of utmost urgency (i.e., a security issue that's actively exploited in the wild) and doesn't want to slow people down with a big iOS upgrade, so the patches are released and applied quickly. I ended up reinstalling the patch and turning off AdGuard in the meantime. Hopefully AdGuard catches up and releases a fix in the next version or two.

Anyway, figured I'd drop the note here in case anyone else is searching on their Mac trying to figure out why their iPhone isn't working after that patch.

 

Disclaimers:

First thing first, I'm new to the whole Fediverse, and Lemmy thing, so please don't hesitate to point out any problems you're foreseeing.

Secondly, I'm by no means saying this is the ideal implementation, something something see above. Please don't hesitate to make recommendations for improvements.

Lastly, I'm not sure if it is completely working. I'm still noticing a few issues that I will document and monitor towards the end of the post. If you know of the cause or how to debug further, please do let me know!

Notes and Assumptions:

  1. I am using an ARM server, so I'm using ARM images; you will need to make sure you're using the image for the correct architecture.
  2. I assume you have Traefik up and running in a separate network. I used docker compose to bring Traefik up with minimal configuration, and I'm just hijacking the default network there (the project folder was gateway, so the complete network name is gateway_default)... there are probably better ways to do this.
  3. On the note of networks, I really don't like the fact that the default postgres was left wide open on the lemmyexternalproxy network. I think I've locked mine down, but you may wish to double check my work.
  4. I'm not sure if what I am doing with the hostnames is correct, but it seems to work for the most part, so I'm not complaining. If there is a better way, please do advise!
  5. I used an override file for docker compose to apply extra settings. This allows me to keep the original docker-compose.yml untouched, and I can just pull in new changes (theoretically).
  6. Since I'm using Traefik, I don't need nginx running and doing nothing. I replaced it with a lightweight alpine image that just exits successfully, so it doesn't use resources.

Without further delays, here's my files:

docker-compose.override.yml:

version: "3.3"

networks:
  lemmyexternalproxy:
    internal: true
  lemmygateway:
    name: gateway_default
    external: true

services:
  lemmy:
    image: dessalines/lemmy:0.17-linux-arm64
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.lemmy.entrypoints=websecure"
      - "traefik.http.routers.lemmy.rule=Host(`lemmy.chiisana.net`) && HeadersRegexp(`Accept`, `^application/`) || Host(`lemmy.chiisana.net`) && Method(`POST`) || Host(`lemmy.chiisana.net`) && PathPrefix(`/{path:(api|pictrs|feeds|nodeinfo|.well-known)}`)"
      - "traefik.http.routers.lemmy.tls=true"
      - "traefik.http.services.lemmy-svc.loadbalancer.server.port=8536"
      - "traefik.docker.network=gateway_default"
    networks:
      - lemmygateway
  lemmy-ui:
    image: dessalines/lemmy-ui:0.17-linux-arm64
    environment:
      - LEMMY_UI_HOST=0.0.0.0:1234
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.chiisana.net
      - LEMMY_UI_HTTPS=true
      - LEMMY_UI_DEBUG=false
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.lemmy-ui.entrypoints=websecure"
      - "traefik.http.routers.lemmy-ui.rule=Host(`lemmy.chiisana.net`)"
      - "traefik.http.routers.lemmy-ui.tls=true"
      - "traefik.http.services.lemmy-ui-svc.loadbalancer.server.port=1234"
      - "traefik.docker.network=gateway_default"
    networks:
      - lemmygateway
  proxy:
    image: alpine:latest
    command: "true"
    entrypoint: "true"
    restart: "no"
  pictrs:
    image: asonix/pictrs:0.4.0-rc.3

lemmy.hjson:

{
  setup: {
    admin_username: "chiisana"
    admin_password: "password-redacted-duh"
    site_name: "chiisana lemmy site"
  }
  database: {
    host: "postgres"
    user: "lemmy"
    password: "password-redacted-duh"
    database: "lemmy"
  }
  email: {
    smtp_server: "smtp.mailgun.org:587"
    smtp_login: "lemmy@chiisana.net"
    smtp_password: "password-redacted-duh"
    smtp_from_address: "lemmy@chiisana.net"
    tls_type: "tls"
  }
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }
  hostname: "lemmy.chiisana.net"
  bind: "0.0.0.0"
  port: 8536
  tls_enabled: true
}

Known issue(s)?

  1. ~~I have registration disabled, as the instance is supposed to be just for my own auth and not depend on other instances. In my /admin section, I'm seeing a ton of users from endlesstalk.org pop up as banned users. I have no idea what that is about, as endlesstalk.org seems to also be used by only one user. I'll be monitoring this and see what comes of it.~~ Edit: Looks like this is just the way the system is designed, and not a configuration error on my part! All good here. Thanks for clarifying it @lemmy@endlesstalk.org !
  2. I'm not sure if I'm getting all the messages federated. In this community, for example, I can see most if not all recent threads. However, most threads have no comments in them. In some newer threads I see comments, but they seem incomplete. I'm not sure if I'm only supposed to receive new messages, or if something else is happening. I'll be monitoring this and hoping federation will just catch up over time.
  3. Edit: It would appear this post itself is not federating to !selfhosted@lemmy.world for some reason... I'm partially hoping it is just caught in some kind of moderation queue, but seeing other posts made after this appear on the list leads me to believe there's still something amiss.

If you encounter any other issue, please do post back so we can try to debug it together. Hope this helps someone!
