this post was submitted on 27 Sep 2023
93 points (95.1% liked)

Technology

all 13 comments
[–] MossyFeathers@pawb.social 31 points 1 year ago

Honestly? I'm not super surprised by this. The human brain (and I assume brains in general) is really good at data compression. Considering neural networks are more or less meant to mimic different aspects of the human brain, it doesn't surprise me too much that they'd be really good at data compression as well.

[–] akrot@lemmy.world 27 points 1 year ago (2 children)

I wonder how consistent the decompression is and how much information is lost in the process.

[–] PupBiru@kbin.social 16 points 1 year ago (1 children)

i’d guess they could hyper-optimise for “perceived difference” rather than data loss specifically… they do a pretty good job of generating something from nothing, so i’d say with enough data they’d probably generate a pretty reasonable facsimile of “standard” stuff

[–] Edgelord_Of_Tomorrow@lemmy.world -2 points 1 year ago (1 children)

An LLM can't know what differences a person would perceive.

[–] ilinamorato@lemmy.world 0 points 1 year ago

There have been a lot of studies done (and published) on what humans can and can't perceive. I wouldn't have much trouble believing that the LLM has access to them and can pattern match on the variables involved.

[–] PlexSheep@feddit.de 9 points 1 year ago (1 children)

So like, mp3, gzip and zstd? Why would you use an LLM for compression??

[–] rubikcuber@programming.dev 33 points 1 year ago (2 children)

The research specifically looked at lossless algorithms, so gzip is the relevant comparison.

"For example, the 70-billion parameter Chinchilla model impressively compressed data to 8.3% of its original size, significantly outperforming gzip and LZMA2, which managed 32.3% and 23% respectively."

However, they do say that it's not especially practical at the moment, given that gzip is a tiny executable compared to the many gigabytes of the LLM's dataset.
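For anyone wondering how a predictive model becomes a lossless compressor at all: the standard trick is arithmetic coding driven by the model's next-symbol probabilities. Here's a rough Python sketch of the idea, with a made-up `toy_model` standing in for the LLM (float-based for readability; real coders use integer renormalisation):

```python
import math

# Made-up toy "model" with fixed next-symbol probabilities. The real
# thing queries an LLM for p(next token | everything seen so far).
def toy_model(context):
    return {"a": 0.7, "b": 0.2, "<eos>": 0.1}

def arithmetic_encode(symbols, model):
    """Narrow the interval [low, high) by each symbol's probability slice."""
    low, high = 0.0, 1.0
    for i, sym in enumerate(symbols):
        probs = model(symbols[:i])
        cum = 0.0
        for s, p in probs.items():
            if s == sym:
                span = high - low
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    # Any number inside [low, high) identifies the whole sequence; the
    # bits needed are about -log2(high - low), i.e. the sum of
    # -log2 p(symbol) over all symbols.
    return low, high, math.ceil(-math.log2(high - low))

low, high, bits = arithmetic_encode(list("aab") + ["<eos>"], toy_model)
print(f"[{low:.4f}, {high:.4f}) -> ~{bits} bits")
# The better the model predicts the data, the wider each slice and the
# fewer bits needed -- which is why a strong LLM compresses text so well.
```

Decompression has to run the exact same model to reproduce those probabilities step by step, which is why the gigabytes of weights are effectively part of the decompressor.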

[–] NaibofTabr@infosec.pub 9 points 1 year ago (1 children)

Do you need the dataset to do the compression? Is the trained model not effective on its own?

[–] Tibert@compuverse.uk 12 points 1 year ago (1 children)

Well, from the article, a dataset is required, but not always the heavier one.

Tho it doesn't solve the speed issue, where the LLM will take a lot more time to do the compression.

gzip can compress 1GB of text in less than a minute on a CPU, while an LLM with 3.2 million parameters requires an hour to compress it

[–] Aceticon@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

Dictionary-based algorithms (like the LZ77 stage in GZIP's DEFLATE) aren't especially amazing at compression; they're more of a balance between speed and compression ability, plus they're meant to compress streams of bytes as the bytes come in.

There are better algorithms for achieving maximum compression, such as substitution ones like Huffman coding (where bytes and sets of bytes are replaced by bit sequences, the most common ones getting the shortest bit sequence, the second most common the second shortest, and so on), but they're significantly slower and need to analyse the entire file before compressing it (and the better you want the compression to be, the more complex the analysis and the slower it gets).
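For illustration, here's a toy Huffman coder in Python, a minimal sketch of that substitution idea (not GZIP's actual code, though DEFLATE does use Huffman coding as its second stage). Note the full pass over the input before any output, which is exactly the streaming limitation described above:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes):
    """Assign shorter bit sequences to more frequent bytes."""
    freq = Counter(data)  # pass 1: analyse the whole input up front
    # Heap entries are (frequency, unique tiebreak, subtree);
    # a subtree is either a byte value or a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:  # repeatedly merge the two rarest subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # single-symbol input edge case
    walk(heap[0][2])
    return codes

# Pass 2 (not shown) would re-read the data and emit each byte's code.
for sym, code in sorted(huffman_codes(b"abracadabra").items(),
                        key=lambda kv: len(kv[1])):
    print(chr(sym), code)  # 'a' (most common) gets the shortest code
```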

Maybe the LLMs can determine upfront the most common character patterns (I use “patterns” here because it might be something more complex than mere sequences: for example, a pattern could cover characters in slots 0, 3 and 4, whilst a sequence would be limited to 0, 1 and 2) and are thus much faster and more thorough at the analysis stage. Or maybe they just act as a pre-analysed frequency model for character patterns in a given language, which would be superior to general stream-based compression (whose frequency “analysis” is done as the bytes in the stream are coming in).

PS: I might be using the wrong English-language terms here, as I learned this compression stuff way back at uni and in a different language.

[–] xodoh74984@lemmy.world 9 points 1 year ago

Gavin Belson has entered the chat