wazowski

joined 5 years ago
[–] wazowski@lemmy.ml 9 points 2 years ago* (last edited 2 years ago)

i think that a word... can have different meanings? πŸ€·β€β™€οΈ

liberal = non-orthodox is not specific to the us, in fact, non-orthodox is one of the official definitions of the word liberal, which is befitting of many leftist policies

it just so happens that liberal is part of neoliberalism, which creates a perception of a contradiction in meaning, but in reality there isn't one πŸ€·β€β™€οΈ

but important to note, all of this is in the realm of prescriptive linguistics

[–] wazowski@lemmy.ml 10 points 2 years ago (2 children)

yeah, but living life with only the essential stuff would be kinda boring, don't you think?

[–] wazowski@lemmy.ml 9 points 2 years ago

always has been 😎

[–] wazowski@lemmy.ml 10 points 2 years ago (4 children)

why?

movies are great

[–] wazowski@lemmy.ml 3 points 2 years ago

a native gui framework doesn't necessarily mean that one will have to write the application in rust: even today, when rust gui libs are in their very infancy, some already provide bindings to other languages (js included)

cross-platform native gui frameworks are super hard, but having a language that's more accessible than c++, more modern and safe, will hopefully bring this closer to reality

[–] wazowski@lemmy.ml 3 points 2 years ago (2 children)

still hoping that rust will bring us one native gui framework to rule them all that will offload at least some of the burden from electron or whatever else js developers use πŸ€·β€β™€οΈ

[–] wazowski@lemmy.ml 1 points 2 years ago

bruh 😬

2baltic4you moment

[–] wazowski@lemmy.ml 6 points 2 years ago

it's really a miracle how all of this is held together tbh while being so cross-platform

the core engine of torch, which contains things like automatic differentiation (the vector-calculus machinery), some tensor operations, data preprocessing, data de/serialisation, et cetera, is written in regular C++, so it runs on basically anything a C++ compiler can target, which is almost everything
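for a rough picture, this is what that autodiff part looks like from the outside (just a sketch through the python api; the graph recording and backward pass actually happen inside the C++ core):

```python
import torch

# build a tiny computation; autograd records the graph as the ops execute
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2

# reverse-mode automatic differentiation, executed by the C++ engine
y.backward()
print(x.grad)  # tensor([4., 6.]) == dy/dx = 2x
```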

the problem starts when you want to add gpu acceleration in order to speed up things like matrix multiplication (which is typically the most computationally expensive part of the machine learning pipeline)
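to make that concrete, this is the kind of op you want off the cpu (a sketch using the standard torch api):

```python
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# on cpu this goes through the core engine's BLAS path
c = a @ b

# if a cuda device is available, the exact same op dispatches to
# gpu kernels instead, which is typically massively faster
if torch.cuda.is_available():
    c = (a.cuda() @ b.cuda()).cpu()
```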

when torch (and other ml libs) started out, cuda was basically the most advanced and easiest-to-use lib for gpu compute (probably still is), nvidia gpus were far superior to anything the competition could offer, and ml on mobile devices wasn't a thing, so everyone went for it, and for a long time ml existed almost solely on devices with nvidia graphics cards that could support cuda

then amd and arm started to catch up, and things like amd rocm were added to support amd gpus, and vulkan was added to support both gpus on mobile devices and also nvidia and amd gpus, and at the moment all of this exists in a kind of mess, where practically all functionality is supported if you use cuda, but with rocm and vulkan a lot of things don't work, and you often have to compile everything from scratch for things to be supported
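you can even see the mess from the api: a sketch of backend probing, assuming a reasonably recent torch build (rocm builds deliberately reuse the cuda api surface, which is exactly the kind of historical quirk i mean):

```python
import torch

# rocm builds of torch reuse the cuda api, so torch.cuda.is_available()
# is true on amd gpus too; the build metadata tells you which backend
# you actually got
if torch.cuda.is_available():
    backend = "rocm" if torch.version.hip else "cuda"
else:
    backend = "cpu"  # the vulkan path is a separate prototype backend
print(backend)
```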

and now all of this mess is wrapped in python to simplify the api, which was a big mistake in my opinion, bc not only is the api simplification unnecessary, but now if you want to target any specific architecture, it must be supported by the core torch engine, some version of a gpu compute lib (unless you want to do inference on the cpu, which you prolly don't), and the python wrapper

so now, bc you want everything to work out of the box, all of these things are put into a binary, which results in this huge file size, and i imagine the maintenance of torch is pretty hard at least partially as a result of this

if you were building something like torch today, things would be a lot simpler, bc you could just write the core engine in smth like C++, and then use smth like vulkan kompute, which is a wrapper api around regular vulkan, but massively simpler and more user-friendly, and it supports every gpu under the sun, and boom, you have a much more concise and easily maintainable library
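smth roughly like this — a sketch from memory of kompute's python bindings (the `kp` package), so the exact names may have drifted, and `mult.comp.spv` is a hypothetical pre-compiled compute shader:

```python
import numpy as np
import kp  # kompute's python bindings (names here are from memory)

mgr = kp.Manager()  # grabs a vulkan device: amd, nvidia, intel, mobile...

a = mgr.tensor(np.array([2, 2, 2], dtype=np.float32))
b = mgr.tensor(np.array([1, 2, 3], dtype=np.float32))
out = mgr.tensor(np.zeros(3, dtype=np.float32))
params = [a, b, out]

# hypothetical pre-compiled SPIR-V shader doing out[i] = a[i] * b[i]
spirv = open("mult.comp.spv", "rb").read()
algo = mgr.algorithm(params, spirv)

seq = mgr.sequence()
seq.record(kp.OpTensorSyncDevice(params))  # upload inputs to the gpu
seq.record(kp.OpAlgoDispatch(algo))        # run the shader
seq.record(kp.OpTensorSyncLocal(params))   # download the results
seq.eval()

print(out.data())  # -> [2., 4., 6.]
```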

[–] wazowski@lemmy.ml 3 points 2 years ago (2 children)

basically torch is a huge lib in itself, and it targets not only virtually all cpu architectures, but also multiple gpu frameworks (cuda, rocm, vulkan), all of which together support thousands of gpus, both desktop and mobile

and all of this is packaged into a single binary so that it works for everyone, regardless of hardware

if you want a smaller size, you can compile it from source for your specific architecture, or download minimised precompiled versions for your target architecture

[–] wazowski@lemmy.ml 2 points 2 years ago* (last edited 2 years ago)

torch, the ml lib by facebook i'm assuming :)

[–] wazowski@lemmy.ml 1 points 2 years ago (1 children)

> Every problem I have seen distributed consensus blockchains have been so far used for seems to have a solution that does not require such blockchains, which doesn’t have drawbacks associated with such blockchains.

a solution which doesn't have some of the problems associated with blockchains, but introduces its own problems, which don't exist in the blockchain space

for example, you mentioned federated code hosting or social media platforms (like lemmy): in this case you eliminate certain problems exclusive to blockchains, but simultaneously introduce other problems, like more centralisation and potential for censorship (instances in the fediverse are controlled by individual ppl, who have virtually complete control over that instance, and these instances are accessed using http and dns, the former having little resistance to censorship and the latter being mostly very centralised), which are problems already solved by blockchains

so once again, it's a question of what you value more and what you're ready to sacrifice

3
Cyberpunked 2077 (www.youtube.com)