this post was submitted on 09 Jun 2023

Python

I have a large object that I want to save to disk because it takes a minute to generate. The OOM reaper kills the process while pickle.dump is writing the object.

It's a tuple of dicts of tuples of array.array.

Can pickle dump in chunks? If not, is there another technique I can use?
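One way to get the chunking the post asks about: pickle.dump can be called several times on the same open file, and pickle.load reads the pickles back in order. That way only one top-level dict's object graph is held by the pickler at a time. A minimal sketch with invented stand-in data (the real object is far larger):

```python
import pickle
from array import array

# Hypothetical stand-in for the large object: a tuple of dicts of
# tuples of array.array, as described in the post.
big = (
    {"a": (array("d", [1.0, 2.0]),)},
    {"b": (array("i", [3, 4]),)},
)

# Write each top-level dict as its own pickle into one file,
# prefixed by a count so loading knows when to stop.
with open("big.pkl", "wb") as f:
    pickle.dump(len(big), f)
    for chunk in big:
        pickle.dump(chunk, f)

# Read the same number of pickles back, in order.
with open("big.pkl", "rb") as f:
    n = pickle.load(f)
    restored = tuple(pickle.load(f) for _ in range(n))
```

Whether this actually avoids the OOM kill depends on whether any single dict fits in memory during serialization.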

top 7 comments
[–] alehc@lemmy.ml 4 points 1 year ago (1 children)

You mean numpy arrays? I think the most efficient way to store them is via np.save. You could create a new directory and store all of your arrays there, with clever file naming so you can rebuild the dictionary structure later.

Alternatively, if you're willing to try PyTorch, you can convert the arrays to tensors and use torch.save to save the entire dictionary in one file. Installing PyTorch just for this might be a bit overkill though, as it's a >1 GB installation.
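The first suggestion could look roughly like this: np.savez writes one .npz archive where each keyword becomes a named array, so a flat dict round-trips via its keys. A sketch with made-up key names, assuming each array.array converts cleanly via the buffer protocol:

```python
import numpy as np
from array import array

# Hypothetical example data; the real structure is nested deeper.
data = {"left": array("d", [1.0, 2.0]), "right": array("d", [3.0, 4.0])}

# Each keyword argument becomes one named array in the archive.
np.savez("chunks.npz", **{k: np.asarray(v) for k, v in data.items()})

# np.load on an .npz reads arrays lazily, one name at a time.
with np.load("chunks.npz") as npz:
    restored = {k: npz[k] for k in npz.files}
```

Because np.load pulls arrays out of the archive individually, loading doesn't need the whole structure in memory at once either.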

[–] lntl@lemmy.ml 1 points 1 year ago (1 children)

It's a tuple of dicts of tuples of array.array, no numpy or torch :(

[–] alehc@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

So python standard library lists?

[–] radarsat1@lemmy.ml 2 points 1 year ago (1 children)

If you're pickling that much data you should definitely consider using a more appropriate data format. Maybe a database or HDF5?
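For the database route, the standard library's sqlite3 can hold each array as a blob keyed by name, so writes happen one small row at a time instead of one giant dump. A rough sketch with invented table and column names, storing the typecode alongside the bytes so the array.array can be rebuilt:

```python
import sqlite3
from array import array

# Hypothetical flat dict of arrays; the real structure would need
# a key scheme encoding its nesting.
data = {"x": array("d", [1.0, 2.0, 3.0]), "y": array("d", [4.0, 5.0])}

con = sqlite3.connect(":memory:")  # use a file path for persistence
con.execute("CREATE TABLE arrays (key TEXT PRIMARY KEY, typecode TEXT, data BLOB)")

# One INSERT per array keeps peak memory near a single array's size.
for key, arr in data.items():
    con.execute("INSERT INTO arrays VALUES (?, ?, ?)",
                (key, arr.typecode, arr.tobytes()))
con.commit()

restored = {}
for key, typecode, blob in con.execute("SELECT key, typecode, data FROM arrays"):
    a = array(typecode)
    a.frombytes(blob)
    restored[key] = a
```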

[–] lntl@lemmy.ml 3 points 1 year ago

Agreed. When I started, things were much simpler. I'm trying not to revise too much code, but I can if there's no other option.

[–] Biorix@lemmy.fmhy.ml 2 points 1 year ago

How are you dumping it?

Can you show us the code?

Have you tried splitting your tuple and saving each dict of the tuple separately?
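That split could be as simple as one pickle file per dict, with the index encoded in the filename. A sketch with illustrative names and stand-in data:

```python
import pickle
from array import array

# Hypothetical stand-in for the tuple of dicts from the post.
big = (
    {"a": (array("i", [1, 2]),)},
    {"b": (array("i", [3]),)},
)

# Serialize one dict at a time to its own file, so peak memory
# during pickling stays near the size of a single chunk.
for i, d in enumerate(big):
    with open(f"part_{i}.pkl", "wb") as f:
        pickle.dump(d, f)

# Rebuild the tuple by loading the parts back in index order.
restored = tuple(
    pickle.load(open(f"part_{i}.pkl", "rb")) for i in range(len(big))
)
```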

[–] McWizard@feddit.de 1 points 1 year ago

A colleague of mine replaced the internal storage format of pickle with JSON, IIRC. It was about 10x faster. I'm not exactly sure how you do that, but I can check if you want to go that way.
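Swapping the on-disk format for JSON would require converting the array.array values first, since json can't serialize them directly. A hedged sketch assuming all arrays hold doubles (typecode "d"), with made-up key names:

```python
import json
from array import array

# Hypothetical example data; assumes every array uses typecode "d".
data = {"samples": array("d", [1.0, 2.5, -3.0])}

# array.array is not JSON-serializable, so convert to plain lists.
# The typecode is lost in JSON, so store it too if arrays vary in type.
with open("data.json", "w") as f:
    json.dump({k: v.tolist() for k, v in data.items()}, f)

with open("data.json") as f:
    restored = {k: array("d", v) for k, v in json.load(f).items()}
```

Whether JSON is actually faster here would need measuring; the 10x figure above is the commenter's recollection, not a general result.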
