If I needed speed I wouldn't be programming in Python.
Python
Welcome to the Python community on the programming.dev Lemmy instance!
Events
Past
November 2023
- PyCon Ireland 2023, 11-12th
- PyData Tel Aviv 2023, 14th
October 2023
- PyConES Canarias 2023, 6-8th
- DjangoCon US 2023, 16-20th (!django)
September 2023
- PyData Amsterdam, 14-16th
- PyCon UK, 22nd-25th
August 2023
- PyLadies Dublin, 15th
- EuroSciPy 2023, 14-18th
July 2023
- PyDelhi Meetup, 2nd
- PyCon Israel, 4-5th
- DFW Pythoneers, 6th
- Django Girls Abraka, 6-7th
- SciPy 2023, 10-16th, Austin
- IndyPy, 11th
- Leipzig Python User Group, 11th
- Austin Python, 12th
- EuroPython 2023, 17-23rd
- Austin Python: Evening of Coding, 18th
- PyHEP.dev 2023 - "Python in HEP" Developer's Workshop, 25th
Python project:
- Python
- Documentation
- News & Blog
- Python Planet blog aggregator
Python Community:
- #python IRC for general questions
- #python-dev IRC for CPython developers
- PySlackers Slack channel
- Python Discord server
- Python Weekly newsletters
- Mailing lists
- Forum
Python Ecosystem:
Fediverse
Communities
- #python on Mastodon
- c/django on programming.dev
- c/pythorhead on lemmy.dbzer0.com
Projects
- Pythörhead: a Python library for interacting with Lemmy
- Plemmy: a Python package for accessing the Lemmy API
- pylemmy: enables simple access to Lemmy's API with Python
- mastodon.py, a Python wrapper for the Mastodon API
Feeds
Also, if I needed speed I wouldn't be printing stuff every 100k instructions.
If I needed speed, I'd be programming in Python but then profiling the performance and re-writing the inner loops and such to call C or BLAS.
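To make that workflow concrete, here is a minimal sketch (the function names and array sizes are illustrative, not from any specific project): profile first to find the hot loop, then hand it to a library that calls into BLAS.

```python
# Minimal sketch of the profile-then-offload workflow; names and sizes
# are illustrative only.
import cProfile
import numpy as np

def slow_dot(a, b):
    # Pure-Python inner loop: the kind of hot spot a profile flags.
    return sum(x * y for x, y in zip(a, b))

def fast_dot(a, b):
    # Same computation handed to NumPy, which calls into BLAS.
    return np.dot(a, b)

if __name__ == "__main__":
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    cProfile.run("slow_dot(a, b)")  # time shows up in the Python loop
    cProfile.run("fast_dot(a, b)")  # the BLAS call barely registers
```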
Surely you can use Rust these days?
In fact, Python is still decent even if you do need speed. We compared Python and Rust for algorithm processing, and we got similar-ish numbers when using numba. Rust was certainly faster, but we would need to retrain a lot of our team, and numba was plenty fast.
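For a rough idea of what the numba route looks like, here is a hedged sketch with a made-up numeric kernel (not the actual algorithm that was benchmarked): decorating the loop-heavy function with @njit compiles it to machine code.

```python
# Hedged sketch: a made-up O(n^2) kernel, not the workload actually compared.
import numpy as np
from numba import njit

@njit(cache=True)
def pairwise_min_dist(points):
    # Nested loops that would crawl in pure Python but compile to
    # machine code under numba's @njit.
    n, dims = points.shape
    best = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            d = 0.0
            for k in range(dims):
                diff = points[i, k] - points[j, k]
                d += diff * diff
            if d < best:
                best = d
    return best ** 0.5

points = np.random.rand(2_000, 3)
print(pairwise_min_dist(points))  # first call includes JIT compile time
```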
Python is fast enough, and if it's not, there are libraries to get it there.
It's fine, I'm not in a hurry.
So, 200 to 800 microseconds on a modern CPU? Fast enough.
Is it just me... or is there a lot of python hate lately?
Nah, my personal hate of indented blocks has been there since the late 90s /s 8-)
Yes, I hate indentation as structure but I hate tracking brackets even more.
Same for me. I have used Python for most things since the late 1990s. Love Python. Have always hated the poor performance... but in my case it was mostly good enough. When it was not good enough, I wrote C code.
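For anyone who hasn't done the "wrote C code" step, here is a hedged sketch of one common approach: calling a compiled shared library through ctypes. The library name and C function are hypothetical, and you would build the C side yourself.

```python
# Hypothetical setup: assumes a shared library built with something like
#   gcc -O3 -shared -fPIC -o libhot.so hot.c
# where hot.c defines:  double dot(const double *a, const double *b, size_t n);
import ctypes
import numpy as np

lib = ctypes.CDLL("./libhot.so")           # hypothetical library name
lib.dot.restype = ctypes.c_double
lib.dot.argtypes = [
    ctypes.POINTER(ctypes.c_double),
    ctypes.POINTER(ctypes.c_double),
    ctypes.c_size_t,
]

def c_dot(a, b):
    # Pass contiguous float64 buffers straight to the C function.
    a = np.ascontiguousarray(a, dtype=np.float64)
    b = np.ascontiguousarray(b, dtype=np.float64)
    return lib.dot(
        a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        b.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        a.size,
    )
```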
Python is good for problems where time to code is the limiting factor. It sucks for compute bound problems where time to execute is the limiting factor. Most problems in my world are time to code limited but some are not.
Python compute performance has always sucked.
I get that... I'm not a developer, I'm a network engineer, but I use a lot of Python in my day-to-day operations. I always took Python to be the "code for non-coders," which made it infinitely more approachable than some of the other languages.
I'm not running the F1 Grand Prix over here, I'm driving to get groceries. So what if it's not the fastest thing out there? Close enough is good enough for me. And in my experience that's what people are using Python for: daily driving.
AI crowd using Python, probably @Big_Boss_77 @rimu
Ahh...that makes sense. I bet you're right.
People use Python a lot as a Matlab, Excel/VBA, or R alternative. That was my use for many years. Some of these are compute focused problems and if the dataset is large enough and the computations complex enough then speed can be an issue.
As far as loading packages and printing go, who cares? These are not computationally intensive and are typically IO bound.
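A small, hedged example of that Matlab/R-style array work (the dataset is synthetic): vectorizing with NumPy keeps the per-element work in compiled code instead of the Python interpreter, which is usually what keeps these workloads fast enough.

```python
# Synthetic example of array-style work: column statistics on a large table.
import numpy as np

data = np.random.rand(5_000_000, 4)   # stand-in for a real dataset

# Vectorized: one pass in compiled code, no per-row Python loop.
means = data.mean(axis=0)
stds = data.std(axis=0)
zscores = (data - means) / stds       # broadcasting handles the shapes

print(means, stds, zscores.shape)
```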
While the processor I'm working on right now supports 14 MIPS...
I doubt it's useful for performance evaluation. However, if you are writing a paper and want to compare your algorithm to an existing one, this can be handy.
Eh, maybe? It's probably only useful for large jumps, and timing is probably good enough for that as well. With small jumps, instruction execution order matters, so a bigger number could very well be faster if it improves pipelining.
It's certainly interesting and maybe useful sometimes, but it's probably limited to people working on Python itself, not regular users.
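If the goal is just the paper-style comparison mentioned above, plain wall-clock timing usually suffices. A hedged sketch with timeit, where the two functions are placeholders for "existing algorithm" and "your algorithm":

```python
# Placeholder implementations standing in for the two algorithms being compared.
import timeit

def existing_impl(data):
    return sorted(data)

def new_impl(data):
    return sorted(data, reverse=True)[::-1]

data = list(range(100_000, 0, -1))

for name, fn in [("existing", existing_impl), ("new", new_impl)]:
    # Take the best of several repeats to smooth out scheduler noise.
    best = min(timeit.repeat(lambda: fn(data), number=10, repeat=5))
    print(f"{name}: {best / 10 * 1e3:.2f} ms per call")
```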
Just remember that an optimized C program will run about 100x faster than a similar Python program on a compute-bound problem. So yes, Python is slow, but often good enough.
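The exact ratio depends heavily on the workload, but the interpreted-vs-compiled gap is easy to see for yourself. In this rough sketch the "C side" is NumPy's compiled dot product rather than hand-written C, so treat the printed ratio as ballpark only.

```python
# Rough illustration of the interpreted-vs-compiled gap; ratios vary a lot
# by workload, so the printed number is ballpark only.
import timeit
import numpy as np

values = list(range(1_000_000))
arr = np.arange(1_000_000, dtype=np.float64)

py_time = timeit.timeit(lambda: sum(v * v for v in values), number=10)
np_time = timeit.timeit(lambda: np.dot(arr, arr), number=10)

print(f"pure Python: {py_time:.3f}s  NumPy/C: {np_time:.3f}s  "
      f"ratio ~{py_time / np_time:.0f}x")
```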