this post was submitted on 16 Jul 2023
1557 points (96.4% liked)
Memes
The biggest difference (other than the existence of infinity) is that the upper limit is inclusive in summation notation and exclusive in for loops. Threw me for a loop (hah) for a while.
Nah, look at the implementation above: the loop condition there uses `<=`, which means it's inclusive.
You’re probably referring to some other implementation that doesn’t involve such fine control, like Python, where `range(4)` means `[0, 1, 2, 3]`.
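For anyone following along, a small Python sketch (mine, not from the post) of how the two conventions line up:

```python
n = 4

# Summation notation sum_{i=1}^{n} i includes the upper limit n,
# so the Python equivalent needs range(1, n + 1), because range's
# stop value is exclusive.
total = sum(i for i in range(1, n + 1))
assert total == 1 + 2 + 3 + 4 == 10

# range(4) on its own stops before 4:
assert list(range(4)) == [0, 1, 2, 3]
```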
Oh yeah, I meant generally. Isn't it most common if not best practice to say `for (i = 0; i < whatever; i++)`?
Fair. I guess to accommodate zero-indexing, so that it still happens `whatever` times, not `whatever + 1` times.
i thought this was pretty weird too when i found out about it. i’m not entirely sure why it’s done this way but i think it has to do with conventions on where to start indexing. most programming languages start their indexing at 0 while much of the time in math the indexing starts at 1, so i=0 to n-1 becomes i=1 to n.
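A tiny sketch of that re-indexing (example values are mine): the 0-based and 1-based sums run over the same n terms, just with the index shifted by one.

```python
n = 5
a = [10, 20, 30, 40, 50]  # stand-ins for the terms a_1, ..., a_n

# math convention: sum_{i=1}^{n} a_i
one_based = sum(a[i - 1] for i in range(1, n + 1))

# programming convention: sum over i = 0, ..., n-1
zero_based = sum(a[i] for i in range(n))

assert one_based == zero_based == 150  # same terms either way
```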
My abstract math professor showed us that sometimes it's useful to count natural numbers from 1 instead of 0, like in one problem we did concerning the relation Q on A = N × N defined by (m,n)Q(p,q) iff m/n = p/q. I don't hate counting natural numbers from 1 anymore because of how commonly this sort of thing comes up in non-computer math contexts.
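A quick sketch of that relation in Python (the function name and test values are mine, not the professor's); the reason counting the naturals from 1 pays off here is that the second components n and q can never be 0, so the quotients m/n and p/q are always defined:

```python
# (m, n) Q (p, q)  iff  m/n == p/q, with m, n, p, q in {1, 2, 3, ...}
def Q(m, n, p, q):
    # Cross-multiplying is equivalent to comparing m/n and p/q
    # (and avoids floating point), since n and q are never 0.
    return m * q == p * n

assert Q(1, 2, 2, 4)      # (1, 2) and (2, 4) both represent 1/2
assert not Q(1, 2, 2, 3)  # 1/2 != 2/3
```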
yeah that's a good example and it shows how weird the number 0 is compared to the positive integers. it seems like a lot of the time things are first "defined" for the positive integers and then afterwards the definition is extended to 0 in a "consistent way". for example, the idea of taking exponents a^n^ makes sense when n is a positive integer, but it's not immediately clear how to define a^0^. so, we do some digging and see that a^m+n^ = a^m^a^n^ when m and n are positive integers. this observation makes defining a^0^=1 "consistent" with the definition on positive integers, since it makes a^m+n^ = a^m^a^n^ true when n=0.
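Spelled out as a one-line derivation (my paraphrase of the step above): set n = 0 in the product rule and the value of a^0^ is forced.

```latex
a^{m} = a^{m+0} = a^{m} \cdot a^{0} \quad\Longrightarrow\quad a^{0} = 1 \qquad (a \neq 0)
```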
i think this sort of thing makes mathematicians think of 0 as a weird index and it's why they tend to prefer starting at 1, and then making 0 the index for the "weird" term when it's included (like the displacement vector in affine space or the constant term in a taylor series).
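For instance (my illustration, standard formula), in a Taylor series the k = 0 index is exactly that "weird" constant term:

```latex
f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\,(x-a)^{k}
     = \underbrace{f(a)}_{k=0\text{: constant term}} + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^{2} + \cdots
```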