elshandra

joined 1 year ago
[–] elshandra@lemmy.world 1 points 5 months ago (1 children)

Just Gabe working on a god tier game.

[–] elshandra@lemmy.world 16 points 6 months ago

It's just not ready yet. VR in general is too awkward, inconvenient, and expensive. The stuff that's available now can be a lot of fun, but it's a long way from where it needs to be to "change the world". And yeah, I wouldn't want it for free since the acquisition.

[–] elshandra@lemmy.world 2 points 6 months ago

Yeah, sometimes even whole albums need to be together. Or groups of songs. The convenience we have today is amazing.

Old and new can mix surprisingly well.

[–] elshandra@lemmy.world 4 points 6 months ago (2 children)

I had burned CDs with different songs, a lot of Pearl Jam and Nirvana, more Tool though. Older stuff too, like Pink Floyd, Led Zep, Bowie, etc. Carried around about a dozen.

[–] elshandra@lemmy.world 0 points 6 months ago* (last edited 6 months ago) (1 children)

Community support is a thing - it's not the lack of support that's to blame here. Have you ever used Microsoft support? If anything, Linux support is more accessible.

A lot of the blame here is Microsoft's clever marketing campaign: providing Windows to educational institutions - with support - for far below cost, in the early days when PC adoption was on the rise.

Distribution saturation is a barrier to entry and to focused support, and Linux is sometimes more complicated to install and repair. Sometimes it's easier to repair, though, because Windows is too busy trying to hide its internals from you.

Today, it's usually easier to support a remote, IT-illiterate person using Linux than one using Windows.

e: I guess to be fair, if you factor in community support for Windows, your options open up quite a lot. I was thinking more about my own interactions with their support. But enterprise support/problems are not the same as personal ones.

[–] elshandra@lemmy.world 1 points 6 months ago

Let's not argue about the potential of "any human-machine interface", because nobody knows how far that can go. We have an idea, but there's still way too much we don't understand.

You're right - humans never have, and never will on their own. It's a long shot, and as I said it's pretty unlikely, because the models will just get better at compensating. But I imagine that if people were interacting with LLMs regularly - vocally - they would soon get tired of extended conversations to get what they want, and the repeated practice of forming those questions for an LLM might in turn be reflected in their human interactions.

[–] elshandra@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (2 children)

I'm going to take the time to illustrate here how I can see LLMs affecting human speech, through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We're far enough down the comment chain that I can reply to myself now, right?

So, we can all agree that people are increasingly using LLMs, in the form of ChatGPT and the like, to acquire knowledge/information - the same way they would use a search engine to follow a link to that knowledge.

Speech-to-text has been a thing for at least three decades (yeah, it was pretty hopeless once, but not so much now). So let's not argue about speech vs text. People already talk to Google and Siri and whoever else to this end, LLMs included, and have the responses read out via TTS.

I remember being blown away back in 1998, watching a blind sysadmin interact with a Linux shell via TTS at rates where I couldn't even make out the words. How far we've come. I digress, so.

We've all experienced trouble getting the information we're looking for, even with all these tools, because there's so much information and it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries, either to be more specific or to exclude relationships to other information.

This, in turn, causes us to think more often about the words we use to get the results we want, because otherwise we spend too much time on recursion.

In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.

Now look, there is absolutely a lot of hopium smoking going on here, but damn, this could have a lasting impact on verbal communication. If technology can train people - through inaccurate/incorrect results - to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.

Imagine:

"Get me a chair."

(wheels out an office chair from the study)

"No, I meant a chair for the kitchen table."

Vs

"Get me a chair for the kitchen table."

You can apply the same thing to human-prompted image generation and video generation.

Now... we don't need LLMs to do this, or to know this. But we are never going to achieve it without a third party - the "LLM", and whatever it's plugged into - because a human recipient will usually be more capable of interpreting these variances, or will employ other contexts not accessible via a single output like speech or text.

But if machines train us to communicate better (more accurately, precisely, and/or concisely), that is an effect I can't welcome enough.

Realistically, the machines will learn to deal with us being dumb, before we adapt.

e: formatting.

[–] elshandra@lemmy.world 1 points 6 months ago (3 children)

This is an interesting and thought-provoking discussion, ty.

You're absolutely right, I was looking for the dead end - plugging an LLM into a solution.

I'm thinking more that LLMs used in conjunction with other tech will have these effects on our communication. LLMs, or whatever replaces them to do that interpretation, are necessary to facilitate it.

When we come up with something better, something that does the same job better, then of course LLMs will be redundant. If that happens, great.

We are already seeing a boom in the popularity of LLMs outside of professional use. Global ubiquity for anything is never going to happen unless we can fix communication, which we probably can't - certainly not alone. It's very much a chicken-and-egg problem, one we can only gain from by progressing towards.

Imagining vocalising using programming languages gave me a chuckle. I have been known to do things like use s/x/y/ to correct myself in written chats, though.
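(For anyone unfamiliar, the s/x/y/ convention is borrowed from sed's substitute command: "replace x with y in what I just said." A rough sketch of how a chat bot could apply such a correction, using Python's re module - the message strings here are made up for illustration:)

```python
import re

# In chat, replying "s/teh/the/" means: take my previous message
# and substitute "teh" with "the" (same syntax as sed's s command).
previous = "teh quick brown fox"
correction = "s/teh/the/"

# Parse the s/pattern/replacement/ form and apply it.
_, pattern, replacement, _ = correction.split("/")
fixed = re.sub(pattern, replacement, previous)
print(fixed)  # -> the quick brown fox
```

A real implementation would need to handle escaped slashes and trailing flags like /g or /i, but the core idea is just a regex substitution over the prior message.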

Programming languages allow us to talk to and listen to machines. LLMs will hopefully allow machines to listen and talk to/between us.

[–] elshandra@lemmy.world 1 points 6 months ago (5 children)

But to go back to OP's original question - how will LLMs affect spoken language - they won't.

That's a rather closed-minded conclusion. It makes it sound like you don't think they have a chance.

LLMs have the potential to pave the way to aligning spoken language, perhaps even evolving human communication to a point where speech is an occasional thing because it's really inefficient.

[–] elshandra@lemmy.world 0 points 6 months ago (7 children)

So I feel like we agree here. LLMs are a step towards solving a low-level human problem; I just don't see that as a dead end. If we don't take the steps, we're still in the oceans. We're also learning a lot in the process ourselves, and that experience will carry on.

I appreciate your analogy; I am well aware LLMs are just clever recursive conditional queries over big, semi-self-updating datasets.

Regardless of whether or not something replaces LLMs in the future, the data, and the processing that's gone into that data, will likely be used along with the lessons we're learning now. I think they're a solid investment from any angle.

[–] elshandra@lemmy.world 0 points 6 months ago (9 children)

Do you actually believe this?

LLMs are the opposite of a dead end - more like the opening of a pipe. It's not that they will burn out; it's just that they'll perhaps reach a point where they're one function of a more complete AI.

At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools for proving truth, or anything like that.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct.

Validating the facts of the response is another function again, which would employ LLMs as a translation tool.

It's not a long leap from there to a language translation tool between humans, where an AI is the interpreter. DeepL on 'roids.

[–] elshandra@lemmy.world 13 points 7 months ago

I don't need Steam to install your app on my PC, unless you choose to make it that way.
