this post was submitted on 10 Jul 2023
52 points (91.9% liked)

cross-posted from: https://lemmy.world/post/1305651

OpenLM-Research Has Released OpenLLaMA: An Open-Source Reproduction of LLaMA

TL;DR: OpenLM-Research has released a public preview of OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA. The team is releasing a series of 3B, 7B, and 13B models trained on different data mixtures, and the model weights can serve as a drop-in replacement for LLaMA in existing implementations.

In the project's repo, OpenLM-Research presents a permissively licensed open-source reproduction of Meta AI's LLaMA large language model: a series of 3B, 7B, and 13B models trained on 1T tokens. The team provides PyTorch and JAX weights of the pre-trained OpenLLaMA models, as well as evaluation results and a comparison against the original LLaMA models. The newer v2 models, trained on a different data mixture, outperform the older v1 models.
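
Because the weights are meant to be drop-in compatible, loading them should look just like loading the original LLaMA checkpoints. Here is a minimal sketch using Hugging Face transformers; the model ID `openlm-research/open_llama_7b` is taken from the project's Hugging Face page, so double-check their repo for the exact model IDs and any current tokenizer caveats.

```python
# Minimal sketch: loading OpenLLaMA as a drop-in LLaMA replacement
# with Hugging Face transformers. Model ID assumed to be
# "openlm-research/open_llama_7b"; verify against the project's repo.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "openlm-research/open_llama_7b"

# The project recommends the slow LlamaTokenizer; the auto-converted
# fast tokenizer reportedly produced incorrect tokenizations at release.
tokenizer = LlamaTokenizer.from_pretrained(model_path)

# device_map="auto" needs the `accelerate` package; it spreads the model
# across available GPUs/CPU automatically.
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```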

This is pretty incredible news for anyone working with LLaMA or other open-source LLMs. Because the weights are drop-in compatible, you can tap into the vast ecosystem of tooling, fine-tuned weights, and resources built around the LLaMA models, which are very popular in many AI communities right now, without being tied to the original LLaMA license.

With this, anyone can now jump into LLaMA-style R&D knowing they can use these models in their projects and businesses, including commercially.

Big shoutout to the team who made this possible (OpenLM-Research). You should support them by visiting their GitHub and starring the repo.

The team has released models at several parameter sizes, some of which are already circulating and being fine-tuned and improved upon by the community.

Yet another very exciting development for FOSS! If I recall correctly, Mark Zuckerberg mentioned in his recent podcast with Lex Fridman that the next official version of LLaMA from Meta will be open-source as well. I am very curious to see how these models develop over the coming year.

If you found any of this interesting, please consider subscribing to /c/FOSAI where I do my best to keep you up to date with the most important updates and developments in the space.

Want to get started with FOSAI, but don't know how? Try starting with my Welcome Message and/or The FOSAI Nexus & Lemmy Crash Course to Free Open-Source AI.

[–] CommunityLinkFixer@lemmings.world 8 points 1 year ago (1 children)

Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn't work well for people on different instances. Try fixing it like this: !fosai@lemmy.world

[–] Blaed@lemmy.world 1 points 1 year ago

Good bot, I will do that next time.