TangledHyphae

joined 11 months ago
[–] TangledHyphae@lemmy.world -2 points 3 months ago* (last edited 3 months ago) (2 children)

It's very specific, and it appears in many places across the entire Bible. The same concept is written in different ways, leaving no room for misinterpretation when you read the whole thing. I just finished the New Testament and can confirm it's spread across that entire thing in different ways too. Here are 3 of them:

Romans 1:26-27 (NIV):

"Because of this, God gave them over to shameful lusts. Even their women exchanged natural sexual relations for unnatural ones. In the same way the men also abandoned natural relations with women and were inflamed with lust for one another. Men committed shameful acts with other men, and received in themselves the due penalty for their error."

1 Corinthians 6:9-10 (NIV):

"Or do you not know that wrongdoers will not inherit the kingdom of God? Do not be deceived: Neither the sexually immoral nor idolaters nor adulterers nor men who have sex with men nor thieves nor the greedy nor drunkards nor slanderers nor swindlers will inherit the kingdom of God."

1 Timothy 1:9-10 (NIV):

"We also know that the law is made not for the righteous but for lawbreakers and rebels, the ungodly and sinful, the unholy and irreligious, for those who kill their fathers or mothers, for murderers, for the sexually immoral, for those practicing homosexuality, for slave traders and liars and perjurers—and for whatever else is contrary to the sound doctrine."

[–] TangledHyphae@lemmy.world 11 points 3 months ago

As someone who writes C++ every day for work, up to C++20 now, I somehow hate the incoming C++23 even more. The idea of concepts (already with us since C++20), it just... gets worse and worse. Although structured bindings in C++17 did actually help some with the syntax, to be fair.
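For anyone who hasn't run into these features yet, here's a minimal sketch of both, a C++20 concept and C++17 structured bindings (the `Addable` concept and the names here are just illustrative):

```cpp
#include <concepts>
#include <iostream>
#include <map>
#include <string>

// C++20 concept: constrains T to types that support operator+.
template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

template <Addable T>
T sum(T a, T b) { return a + b; }

int main() {
    std::map<std::string, int> scores{{"alice", 1}, {"bob", 2}};

    // C++17 structured bindings: unpack each key/value pair in place.
    for (const auto& [name, score] : scores) {
        std::cout << name << " = " << score << "\n";
    }

    std::cout << sum(2, 3) << "\n"; // OK: int satisfies Addable
    // sum("a", "b");               // would fail the concept check
}
```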

[–] TangledHyphae@lemmy.world -1 points 3 months ago

Seems more "anti-authoritarian-communism" than anything.

[–] TangledHyphae@lemmy.world 1 points 5 months ago

So that's why I have these buying habits. I usually buy extras of things, because I know I'll run out at the last minute, right when I really need them. The medication only goes so far; it doesn't fully cure it.

[–] TangledHyphae@lemmy.world 10 points 5 months ago

He should have installed neovim with LSPs for Python/Rust/etc for intellisense and linting to really get her all hot and bothered.

[–] TangledHyphae@lemmy.world 3 points 6 months ago (1 children)

They could be more like AMD in that regard, to answer your question:

Direct contributions to Linux kernel: AMD contributes directly to the Linux kernel, providing open-source drivers like amdgpu, which supports a wide range of AMD graphics cards.

Mesa 3D Graphics Library: AMD supports the Mesa project, which implements open-source graphics drivers, including those for AMD GPUs, enhancing performance and compatibility with OpenGL and Vulkan APIs.

AMDVLK and RADV Vulkan drivers: AMD has released AMDVLK, their official open-source Vulkan driver. In addition to this, there's also RADV, an independent Mesa-based Vulkan driver for AMD GPUs.

Redistributable firmware: AMD publishes the firmware blobs its GPUs require through the linux-firmware repository; the firmware itself is proprietary, but freely redistributable, so the open drivers work out of the box with the Linux kernel.

ROCm (Radeon Open Compute): An open-source platform providing GPU support for compute-oriented tasks, including machine learning and high-performance computing, compatible with AMD GPUs (see the HIP sketch after this list).

AMDGPU-PRO Driver: AMD's proprietary driver stack, which layers closed-source OpenGL/Vulkan and professional components on top of the same open-source amdgpu kernel driver, offering compatibility and performance for professional and gaming use.

X.Org Driver (xf86-video-amdgpu): An open-source X.Org driver for AMD graphics cards, providing support for 2D graphics, video acceleration, and display features.

GPUOpen: A collection of tools, libraries, and SDKs for game developers and other professionals to optimize the performance of AMD GPUs in various applications, many of which are open source.
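For a taste of what the ROCm stack enables, here's a minimal HIP vector-add sketch. This assumes a working ROCm install and the hipcc compiler; error handling is omitted and the kernel name is just illustrative:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Minimal HIP kernel: element-wise vector addition on the GPU.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha, n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb, n * sizeof(float), hipMemcpyHostToDevice);

    // Launch with 256 threads per block, enough blocks to cover n.
    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(hc, dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("hc[42] = %f\n", hc[42]); // expect 126.0 (42 + 84)
    hipFree(da); hipFree(db); hipFree(dc);
}
```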

[–] TangledHyphae@lemmy.world 1 points 7 months ago

I'm betting the truth is somewhere in between. Models are only as good as their training data, so if over time they prune out the bad samples to increase overall quality and accuracy, in theory it should vastly improve every model. But the sheer size of the datasets they're using now is 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with its "Phi-2" model, which trains on significantly less data but focuses on the quality of the dataset itself, letting a 2.7B-parameter model compete with 7B-parameter models.

https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

"In complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation."

This is likely where these models are heading: pruning out superfluous and outright incorrect training data.
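As a toy illustration of that kind of curation (the heuristics here are purely hypothetical, not anything Microsoft has described): a pass that drops exact duplicates and too-short samples before training:

```cpp
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Toy dataset-curation pass: drop exact duplicates and very short
// samples. Real pipelines use fuzzy dedup and learned quality scores.
std::vector<std::string> curate(const std::vector<std::string>& samples,
                                std::size_t min_len = 20) {
    std::unordered_set<std::string> seen;
    std::vector<std::string> kept;
    for (const auto& s : samples) {
        if (s.size() < min_len) continue;      // too short to be useful
        if (!seen.insert(s).second) continue;  // exact duplicate
        kept.push_back(s);
    }
    return kept;
}

int main() {
    const std::vector<std::string> raw = {
        "The mitochondria is the powerhouse of the cell.",
        "The mitochondria is the powerhouse of the cell.",  // duplicate
        "lol",                                              // too short
        "Water boils at 100 degrees Celsius at sea level.",
    };
    for (const auto& s : curate(raw)) std::cout << s << "\n";
}
```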

[–] TangledHyphae@lemmy.world 10 points 7 months ago (5 children)

Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guardrails. There are some good ones already, like Mistral 7B (and Dolphin-Mistral in particular, an uncensored fine-tune). I hope Mistral and the other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.

[–] TangledHyphae@lemmy.world 2 points 7 months ago* (last edited 7 months ago)

I've been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what's happening behind the scenes.) The problem that remains is the "context window." Claude.ai is over 100k tokens now, I think, but the context still limits how much code a single "session" can produce within that window. I'm still trying to push every model to its limits, but another big problem in the industry now is measuring effectiveness, via "perplexity," at a given context length.

https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small

This plot shows that as the window fills up (the total being the number of tokens of code you insert plus every token the model generates alongside it), everything the model produces becomes less accurate and more perplexing overall.
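For anyone unfamiliar with the metric: perplexity is just the exponentiated average negative log-likelihood per token, so lower means the model is less "surprised" by the text. A minimal sketch of the computation (the log-probabilities are made-up stand-ins for real model output):

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Perplexity = exp(-(1/N) * sum of log p(token_i)).
// Lower is better: the model assigns higher probability to the text.
double perplexity(const std::vector<double>& log_probs) {
    double sum = 0.0;
    for (double lp : log_probs) sum += lp;
    return std::exp(-sum / static_cast<double>(log_probs.size()));
}

int main() {
    // Hypothetical per-token log-probabilities from a model.
    const std::vector<double> log_probs = {-0.2, -1.5, -0.7, -2.3, -0.4};
    std::cout << "perplexity: " << perplexity(log_probs) << "\n";
}
```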

But you're right overall: these things will continue to improve. For now you still need an engineer to actually make the code function in a particular environment. I don't get the feeling we'll see that change within the next few years, but if it does, every IT worker on earth becomes effectively obsolete, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.

[–] TangledHyphae@lemmy.world 19 points 7 months ago (1 children)

Why would that ever even happen? What incentive does a business have to stifle its own profit margins?

[–] TangledHyphae@lemmy.world 4 points 7 months ago

You just described all of my use cases. I need to get more comfortable with Copilot- and Codeium-style services again; I enjoyed them to some extent 6 months ago. Unfortunately my current employer has to comply with federal government security protocols, and I'm not allowed to ship any code in or out of some dev machines. In lieu of that, I still run LLMs on another machine, acting, like you mentioned, as a sort of Stack Overflow replacement. I can describe anything or ask anything I want and immediately get extremely specific, custom code examples.

I really need to get Codeium or Copilot working again just to see if anything has changed in the models (I'm sure it has).
