this post was submitted on 06 Sep 2024
14 points (81.8% liked)


OMG it's sooo daammmn slooow, it takes around 30 seconds to bulk-insert 15,000 rows

Disabling indices doesn't help. The database recovery model is SIMPLE. My table is 50 columns wide, and from what I understand the main reason is the stupid limit of 2100 parameters per query in the ODBC driver. I am using the .NET SqlBulkCopy. I only open the connection + transaction once per ~15,000 inserts.

I have 50 million rows to insert and it takes literally days. Please send help, I can fucking write with a pen and paper faster than the damned Microsoft driver inserts rows
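For reference, the shape of the setup described above is roughly this: a minimal C# sketch, assuming the Microsoft.Data.SqlClient package (System.Data.SqlClient exposes the same API); the connection string, table, and column names are invented, not OP's actual code.

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

class BulkLoadSketch
{
    static void Main()
    {
        // Hypothetical connection string and table; stand-ins for OP's setup.
        var connectionString = "Server=myserver;Database=mydb;Integrated Security=true;";

        // Build a DataTable with the same shape as the destination table.
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Volatility", typeof(double));
        // ...up to 50 columns in OP's case

        for (var i = 0; i < 15_000; i++)
            table.Rows.Add(i, 0.25);

        using var conn = new SqlConnection(connectionString);
        conn.Open();
        // One transaction per ~15k-row chunk, as described above.
        using var tx = conn.BeginTransaction();

        using var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx)
        {
            DestinationTableName = "dbo.MarketData",
            BatchSize = 15_000,   // rows sent per round trip
            BulkCopyTimeout = 0   // disable the 30-second default timeout
        };
        bulk.WriteToServer(table);
        tx.Commit();
    }
}
```

Worth noting: SqlBulkCopy streams rows over the TDS bulk-load path rather than issuing parameterized INSERT statements, so the 2100-parameter limit shouldn't apply on that path.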

top 23 comments
[–] LainTrain@lemmy.dbzer0.com 13 points 2 months ago (4 children)

Hobbyist here. Is it normal for businesses to have 50 million rows to insert into a 50-column-wide table, via queries with 2100+ parameters, 15,000 inserts at a time, into a single DB?

[–] transientpunk@sh.itjust.works 6 points 2 months ago

It definitely seems unusual and poorly optimized...

Oh buddy, enjoy your life & don't touch Microsoft even with a 10-meter stick

[–] deegeese@sopuli.xyz 2 points 2 months ago (2 children)

Inserting 15k rows of 50 columns into a 50M-row table is something we do every day.

2100 params on a query sounds like spaghetti code.

I suspect OP is using single-row insert statements when they need a bulk insert to be performant.

[–] kSPvhmTOlwvMd7Y7E@programming.dev -1 points 2 months ago

I am using SqlBulkCopy; given how bad MS is at naming things, that might as well be doing row inserts instead of bulk ones

[–] kSPvhmTOlwvMd7Y7E@programming.dev -4 points 2 months ago

2100 parameters is a documented ODBC limitation (which applies to all statements in a batch)

This means that an "INSERT INTO t (c1, c2) VALUES (?,?), (?,?), ..." can only have 2100 bound parameters in total. That has nothing to do with the code, and even less to do with the surrounding code being "spaghetti".

The tables ARE normalised; the fact that there are 50 columns is because the underlying market-data calibration functions expect dozens of parameters and return dozens of results, such as volatility, implied durations, forward duration, and more.
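To make the parameter arithmetic above concrete (an illustration only, with a hypothetical table name, not OP's code): with 50 columns, the 2100-parameter cap allows at most 2100 / 50 = 42 rows per multi-row INSERT, so 50M rows would need over a million statements on that path.

```csharp
using System;
using System.Linq;

class ParamLimitMath
{
    static void Main()
    {
        const int columns = 50;      // OP's table width
        const int maxParams = 2100;  // documented per-statement parameter cap
        int rowsPerInsert = maxParams / columns;   // 42 rows per INSERT

        // One maximal multi-row VALUES clause (hypothetical table name).
        string values = string.Join(",", Enumerable.Range(0, rowsPerInsert).Select(r =>
            "(" + string.Join(",", Enumerable.Range(0, columns)
                                             .Select(c => $"@p{r * columns + c}")) + ")"));
        string sql = $"INSERT INTO dbo.MarketData VALUES {values}";

        Console.WriteLine($"{rowsPerInsert} rows per INSERT");
        Console.WriteLine($"statement length: {sql.Length:N0} chars");
        Console.WriteLine($"{50_000_000 / rowsPerInsert:N0} statements for 50M rows");
    }
}
```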

The amount of immaturity, inexperience, and ignorance coming from 2 people here is astounding

Blocked

[–] GetOffMyLan@programming.dev 1 points 2 months ago* (last edited 2 months ago) (1 children)

No. This seems like a poorly designed system. It definitely sounds like a NoSQL database would be a much better fit for this task.

And that many parameters seems like madness haha

[–] kSPvhmTOlwvMd7Y7E@programming.dev -3 points 2 months ago (1 children)

Please enlighten us? You barely know anything about the system or its usage, and you have deduced NoSQL is better? Lol

[–] GetOffMyLan@programming.dev 6 points 2 months ago* (last edited 2 months ago) (1 children)

A flat 50-column table is usually an indicator of bad design and a lack of normalization.

NoSQL is absolutely ideal for flat data with lots of columns and huge amounts of rows. It's like one of its main use cases.

That many parameters is an indicator of poorly structured queries and spaghetti code. There is no way that's the best structure for the data.

[–] RagingHungryPanda@lemm.ee 7 points 2 months ago* (last edited 2 months ago) (1 children)

I've done a lot of work and no, that is not normal.

A few things. First: SQL Server has tools for migrating data that are pretty fast, and SqlBulkCopy can use some of them. Check to see whether the built-in DB tools are better for this.

SqlBulkCopy can handle way more than 15,000 records

Why are you wrapping a data dump in a transaction? That will slow things down for sure.

You generally shouldn't be running queries so huge that you're nearing the parameter limit.

Can you share the code?
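A hedged sketch of what those suggestions could look like in C# (names invented; TableLock assumes nothing else needs the table during the load):

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

static class TunedLoadSketch
{
    // No explicit outer transaction, a bulk-update table lock,
    // and larger internal batches.
    public static void Load(DataTable rows, string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        var options = SqlBulkCopyOptions.TableLock               // bulk-update lock on the target
                    | SqlBulkCopyOptions.UseInternalTransaction; // one transaction per batch
        using var bulk = new SqlBulkCopy(conn, options, null)
        {
            DestinationTableName = "dbo.MarketData",  // hypothetical target table
            BatchSize = 50_000,                       // well above 15k; tune empirically
            BulkCopyTimeout = 0
        };
        bulk.WriteToServer(rows);
    }
}
```

Note that UseInternalTransaction can't be combined with an external transaction, so if the load must be atomic across tables, that flag has to go.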

[–] kSPvhmTOlwvMd7Y7E@programming.dev 1 points 2 months ago (1 children)

I timed the transaction and the opening of the connection; it takes maybe 100 milliseconds, which absolutely doesn't explain the abysmal performance

The transaction is needed because 2 tables are touched; I don't want to deal with partially inserted data

Cannot share the code, but it's Python calling .NET through "clr", and using SqlBulkCopy

What do you suggest I use instead? It's either a prepared query with thousands of parameters, or a plain-text string with the parameters inlined (which, admittedly, I didn't try; it might be faster lol)

[–] RagingHungryPanda@lemm.ee 3 points 2 months ago* (last edited 2 months ago)

One thing to know about transactions is that they track the data and then write it; it's not the opening that slows things down. A question though: what is your source data? Do you have a big CSV or something? Can you do a DB-to-DB transfer instead? There's also a tool called the BCP utility.

Edit: SQL Server/SSMS have tools for doing migrations and batch imports

[–] kyoji@programming.dev 4 points 2 months ago* (last edited 2 months ago)

Just used SqlBulkCopy via C# and .NET a few weeks ago to insert 5-7 million rows into multiple tables in a matter of seconds.

I don't think any of my tables had 50 columns, but one had maybe half of that. Reading your other posts, my experience was different in these ways:

  • Not using Python but C#
  • The machine performing the insert was physically close to the SQL server and did not utilize WAN. (Not sure if this applies to you as well, I don't recall you saying)
  • I don't remember putting a transaction on the insert; I just followed Microsoft's examples from the documentation. I think a transaction has a chance of nullifying the speed you gain from using bulk insert.

Lastly, I think you should consider being more respectful in some of your replies. We all get being frustrated with technology, but you don't need to extend that to people who are helping you for free.

[–] stalker@lemmy.ml 4 points 2 months ago (1 children)

What is your latency? Can you move the data closer to where the DB is (cloud)? Did you change the isolation level? Or the recovery model? Did you try bcp? Have you deleted the indexes on the table?

[–] kSPvhmTOlwvMd7Y7E@programming.dev 0 points 2 months ago* (last edited 2 months ago)

Will try bcp & report back. EDIT: I can't install bcp because it is only distributed with SQL Server itself, and I cannot install that on my corporate laptop.

[–] cccrontab@lemmy.world 3 points 2 months ago (1 children)

Try BCP. I'm fairly new to the Microsoft landscape too, but found using BCP really helped with loading efficiency.

[–] kSPvhmTOlwvMd7Y7E@programming.dev -1 points 2 months ago* (last edited 2 months ago) (1 children)

I will try bcp. Somehow I was convinced I had to have access to the machine running the SQL server to use it, but from the docs I see I can specify a remote host. Will report back! EDIT: I can't install bcp because it is only distributed with SQL Server itself, and I cannot install that on my corporate laptop.

[–] cccrontab@lemmy.world 3 points 2 months ago

No, it's a standalone utility that you can download and install separately from SQL Server. It just adds bcp.exe to your command line.

Docs

Look for the link that says "Download Microsoft Command Line Utilities 15 for SQL Server (x64)".

[–] RonSijm@programming.dev 3 points 2 months ago

OMG it's sooo daammmn slooow, it takes around 30 seconds to bulk-insert 15,000 rows

Do you have any measurements of how long it takes when you just 'do it raw'? Like trying the same insert through SQL Server Management Studio or something?

Because to me it's not really clear what's slow. You're complaining specifically about the Microsoft ODBC driver, but do you base that on anything? Can you insert faster from Linux or through other means?

If it's just 'always slow', it might just be the SQL Server. If you can pinpoint when it's slow and when it's fast(er), that will probably help tell how to speed it up.
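One cheap way to pinpoint it (a sketch with invented names; it times the connection setup and the bulk write separately):

```csharp
using System;
using System.Data;
using System.Diagnostics;
using Microsoft.Data.SqlClient;

static class WhereIsItSlow
{
    // Time each stage separately so "it's slow" becomes "stage X is slow".
    public static void Measure(DataTable rows, string connectionString)
    {
        var sw = Stopwatch.StartNew();
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        Console.WriteLine($"connect: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        using var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.MarketData" };
        bulk.WriteToServer(rows);
        Console.WriteLine($"bulk write: {sw.ElapsedMilliseconds} ms");
    }
}
```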

[–] Randelung@lemmy.world 2 points 2 months ago

A friend of a friend found that exporting to CSV and importing is the fastest route. Honestly crazy, but I recreated a test and it's actually a little faster (when dumping and recreating the whole table; YMMV when inserting).

I'm not 100% sure if it was MSSQL, though.

[–] aMockTie@beehaw.org 1 points 2 months ago (1 children)

It's been a little while since I worked on ODBC stuff, but I have a couple of thoughts:

  • Would it be possible to use something like a table function on the DB side to simplify the query from the ODBC side?

  • I could be misremembering, but I feel like looping through individual inserts with an open connection was faster than trying to submit data in bulk when inserting that much data in one shot. Might be worth doing a benchmark in a test DB and table to confirm.

I know I was able to insert more than 50M rows in a matter of single-digit hours, but unfortunately I don't have access to that codebase anymore to double-check the specifics.

[–] deegeese@sopuli.xyz 1 points 2 months ago

Looping single inserts over an open connection is far, far slower than a bulk insert because every row is another transaction.

The only thing it's faster than is opening and closing a connection for each row.