kromem

joined 1 year ago
[–] kromem@lemmy.world 2 points 3 days ago (1 children)

You haven't used Cursor yet, have you?

[–] kromem@lemmy.world 1 points 3 days ago

That's definitely one of the ways it's going to be applied.

The bigger challenge is union negotiations around voice synthesis for those lines, but that will eventually get sorted out.

It won't be dynamic unless the game is live service, but you'll have significantly more fleshed-out NPCs by the next generation of open-world games (around 5-6 years from now).

Games released earlier than that will be somewhat enhanced, but not built from the ground up with it in mind the way the next generation will be.

[–] kromem@lemmy.world 2 points 1 week ago

Base model =/= Corpo fine tune

[–] kromem@lemmy.world 7 points 1 week ago

Wait until it starts feeling like revelation deja vu.

Among them are Hymenaeus and Philetus, who have swerved from the truth, saying resurrection has already occurred. They are upsetting the faith of some.

  • 2 Tim 2:17-18
[–] kromem@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (1 children)

I'm a seasoned dev and I was at a launch event when an edge case failure reared its head.

In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

  1. Automatically add logging statements to help identify where the issue was occurring
  2. Update the code with a fix once I told it the issue
  3. Remove the logging statements, after which I pushed the update

I never typed a single line of code and never left the chat box.
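
To give a flavor of step 1, the instrumentation it generated looked roughly like this (a hypothetical reconstruction, not the actual code from that day; the function and field names are made up):

    import logging

    logger = logging.getLogger(__name__)

    def process_order(order, discount):
        # The model dropped debug logging like this at each branch point
        # so the failing edge case would surface in the output.
        logger.debug("process_order: order_id=%r discount=%r", order.id, discount)
        if discount is not None and discount.expired:
            logger.debug("discount %r expired at %s", discount.code, discount.expires_at)
        # ... original business logic continues here ...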

My job is increasingly becoming Steinmetz drawing the 'X' for Henry Ford rather than sitting on the assembly line, and I'm all for it.

And this only became possible in the last few months.

We're already well past the scaffolding stage. That's old news.

Developing has never been easier or more plain old fun, and it's getting better literally by the week.

Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.

[–] kromem@lemmy.world 1 points 2 weeks ago

Actually, they are hiding the full CoT sequence outside of the demos.

What you are seeing there is a summary; because the underlying process is hidden, it's not possible to see what actually transpired.

People are not at all happy about this aspect of the situation.

It also means that model context (which in research has been shown to be much more influential than previously thought) is now in part hidden with exclusive access and control by OAI.

There are a lot of things to focus on in that image, and "hur dur the stochastic model can't count letters in this cherry-picked example" is the least among them.

[–] kromem@lemmy.world 20 points 3 weeks ago

I was thinking the same thing!!

It's like at this point Trump is watching the show to take notes and stage directions.

[–] kromem@lemmy.world 7 points 3 weeks ago* (last edited 3 weeks ago)

Yep:

https://openai.com/index/learning-to-reason-with-llms/

First interactive section. Make sure to click "show chain of thought."

The cipher one is particularly interesting, as it's intentionally difficult for the model.

The tokenizer famously obscures letter-level structure, which is why previous models can't count the number of 'r's in 'strawberry'.

The cipher depends on two-letter pairs, and you can see how the model fumbles the tokenization around the 'xx' at the end of the last word and gradually corrects course.

It will help clarify how, behind the scenes, the model goes about solving something like the example I posted earlier.
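
If you want to see the letter-counting problem for yourself, here's a minimal sketch using OpenAI's tiktoken library (the exact chunks and token IDs depend on which encoding you load):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE encoding
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])
    # Prints a handful of multi-letter chunks (something like ['str', 'aw', 'berry']),
    # so the model never "sees" the individual r's it's being asked to count.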

[–] kromem@lemmy.world 5 points 3 weeks ago (4 children)

You should really look at the full CoT traces on the demos.

I think you think you know more than you actually know.

[–] kromem@lemmy.world -3 points 3 weeks ago* (last edited 3 weeks ago) (8 children)

I'd recommend everyone saying "it can't understand anything and can't think" to look at this example:

https://x.com/flowersslop/status/1834349905692824017

Try to solve it from the first image alone before opening the second to see o1's response.

Let me know if you got it before seeing the actual answer.

[–] kromem@lemmy.world 70 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

It just tore into the claims, citing all the reasons the theory was preposterous, bordering on batshit crazy.

And then it said "and your theory doesn't address the thermite residue," going on to reiterate the commenter's own wild theory.

It was very much a 'don't name your gods' moment that summed up the sub: a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

As long as they only focused on generic memes of "do your own research" and "you aren't being told the truth" they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.

[–] kromem@lemmy.world 12 points 3 weeks ago* (last edited 3 weeks ago)

The pause was long enough that she was able to say all the things in it mentally.

10
submitted 8 months ago* (last edited 8 months ago) by kromem@lemmy.world to c/technology@lemmy.world
 

I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
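
To make the combinatorial core of that argument concrete, here's a toy calculation (the numbers are mine and purely illustrative, not from the paper):

    from math import comb

    n_skills = 1000  # assumed inventory of atomic language skills (illustrative)
    k = 4            # skills combined in a single text excerpt

    print(comb(n_skills, k))  # 41,417,124,750 distinct 4-skill combinations

    # No corpus contains examples of tens of billions of skill combinations,
    # so handling arbitrary combinations can't be pure parroting of training data.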

 


I've suspected for a few years now that optoelectronics is where this is all headed. It's exciting to watch as important foundations are set on that path, and this was one of them.

 

I've had my eyes on optoelectronics as the future hardware foundation for ML compute (and not just interconnect) for a few years now, and it's exciting to watch the leaps and bounds occurring at such a rapid pace.

 

The Minoan-style headbands from Egypt during the 18th dynasty are particularly interesting.
