this post was submitted on 24 Oct 2023

homeassistant


Home Assistant is open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server. Available for free at home-assistant.io


I have been a huge HA fan since dropping FHEM a few years ago, and I am more than delighted with the overall progress in HA, especially the Year of the Voice... but I can't quite wrap my head around a stack I want to build. Maybe one of you has a basic idea how?

I use HA with Whisper/Piper etc., and right now I am stuck on making information from plain text available to Piper. Say I have a wiki, a txt file, or anything else that would work, and that document holds loads of text... I don't know... a recipe... or some history articles... and now I need to extract a specific piece of information from said text.

Wakeword, can you please tell me how much salt is used?

or

Jasper, in which year did I buy that console?

I am able to run all my intents, but it doesn't make sense to write a million of them to achieve this. I also doubt the ChatGPT API is a good solution for something as trivial as summarizing a text (like Blinkist does) or finding a specific piece of info, and it's not exactly great for privacy either.

Any suggestions? GPT4All and some API? Lists of intents? I mean, it would be nice if I could just add a Nextcloud instance and some add-on would scrape the info for me there.
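To make it concrete, here is a naive sketch of the lookup I have in mind, in Python. Everything here is made up for illustration: the file path, the crude keyword scoring, and the idea that a local model takes over at the end.

```python
# Naive sketch: find the chunk of a plain-text document most relevant to a
# spoken question, then hand it to a local model. Paths and scoring are
# placeholders, not a real add-on.
from pathlib import Path

def best_chunk(question: str, text: str, size: int = 500) -> str:
    """Crude keyword overlap: return the chunk sharing the most words with the question."""
    words = set(question.lower().split())
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    return max(chunks, key=lambda c: len(words & set(c.lower().split())))

question = "How much salt is used?"
doc = Path("recipes/bread.txt").read_text()  # made-up path; could be a Nextcloud-synced folder
context = best_chunk(question, doc)
# ...then pass `context` plus the question to a local model (GPT4All or similar)
# instead of maintaining a mile-long list of hand-written intents
print(context)
```

The chunk scoring is obviously a stand-in for whatever a real add-on would do, but it shows why a generic model beats hand-written intents for this.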

top 2 comments
emuspawn@orbiting.observer 3 points 11 months ago

Have you played around with hosting your own LLM? I've just started running oobabooga; it lets you download various LLMs and host them. I've been working on getting it set up so the AI can provide text for Piper and take input from Whisper. Ideally it wants an Nvidia card, but it will also work with AMD or on CPU. That would let you use the API to get text for Piper to read. It's a lot more privacy-oriented than sending your queries off to ChatGPT. The larger models take more CPU/RAM/VRAM to run, but perhaps a smaller tuned model would suit your needs.
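For example, here's a rough sketch of that round trip, assuming text-generation-webui is running with its OpenAI-compatible API extension enabled. The URL/port, the prompt wording, and the Piper voice file are all assumptions; adjust them to your install.

```python
# Rough sketch, not a drop-in solution: ask a locally hosted LLM about a
# document, then speak the answer with the Piper CLI. Everything stays on
# your own hardware.
import subprocess
import requests

# Assumption: text-generation-webui with its OpenAI-compatible API extension
# enabled; change host/port to match your setup.
API_URL = "http://localhost:5000/v1/chat/completions"

def ask_local_llm(document: str, question: str) -> str:
    """Send the document plus the question to the local model and return its answer."""
    resp = requests.post(API_URL, json={
        "messages": [
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{document}\n\nQuestion: {question}"},
        ],
        "max_tokens": 200,
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def speak(text: str) -> None:
    """Render the answer to a wav with the Piper CLI (voice file is a placeholder)."""
    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "answer.wav"],
        input=text.encode(), check=True,
    )

if __name__ == "__main__":
    doc = open("recipe.txt").read()
    speak(ask_local_llm(doc, "How much salt is used?"))
```

Large documents won't fit in a single prompt, so in practice you'd search out the relevant chunk first, but nothing here ever leaves your machine.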

yournamehere@lemm.ee 1 point 11 months ago

dude i love you. great idea!