[โ€“] smiletolerantly@awful.systems 11 points 4 hours ago (1 children)

I have one big frustration with that: your voice input has to be understood PERFECTLY by the STT (speech-to-text) engine.

If you have a "To Do" list, and speak "Add cooking to my To Do list", it will do it! But if the TTS system understood:

  • Todo
  • To-do
  • to do
  • ToDo
  • To-Do
  • ...

then the system will say it couldn't find that list. The same goes for the names of your lights, asking for the time, and so on. And you have very little control over this.

HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.

They recently added the option to use the LLM as a fallback only, but on most people's hardware that means a big chunk of requests take a suuuuuuuper long time to get a response.

I do not understand why there's no option to simply fall back to the most similar known command when there's no perfect match, using something like the Levenshtein distance.
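For illustration, here's a minimal sketch of that idea in Python. The list names, the transcripts, and the distance threshold are all hypothetical, and this is not Home Assistant's actual API; it just shows how an edit-distance fallback would resolve the STT variants above:

```python
# Hypothetical sketch: resolve an imperfect STT transcript to the closest
# known list name via Levenshtein (edit) distance, instead of exact match.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

def closest_match(heard: str, known: list[str], max_distance: int = 3) -> str | None:
    """Return the known name nearest to the transcript, or None if all are too far."""
    best = min(known, key=lambda name: levenshtein(heard.casefold(), name.casefold()))
    if levenshtein(heard.casefold(), best.casefold()) <= max_distance:
        return best
    return None

known_lists = ["To Do", "Groceries"]  # hypothetical list names
for transcript in ["Todo", "To-do", "to do", "ToDo", "To-Do"]:
    print(transcript, "->", closest_match(transcript, known_lists))
```

With a small threshold like 3 edits, every variant above resolves to "To Do", while a genuinely unknown name still comes back as no match.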

[โ€“] Tja@programming.dev 5 points 3 hours ago

Because it takes time to implement. It will come.