this post was submitted on 19 Jun 2023
3 points (100.0% liked)

Technology

20 readers
4 users here now

This magazine is dedicated to discussions on the latest developments, trends, and innovations in the world of technology. Whether you are a tech enthusiast, a developer, or simply curious about the latest gadgets and software, this is the place for you. Here you can share your knowledge, ask questions, and engage in discussions on topics such as artificial intelligence, robotics, cloud computing, cybersecurity, and more. From the impact of technology on society to the ethical considerations of new technologies, this category covers a wide range of topics related to technology. Join the conversation and let's explore the ever-evolving world of technology together!

founded 1 year ago
top 2 comments
sorted by: hot top controversial new old
[โ€“] dack@sh.itjust.works 2 points 1 year ago (1 children)

Currently, these systems have no way to separate trusted and untrusted input. This leaves them vulnerable to prompt injection attacks in basically any scenario involving unvalidated user input. It's not clear yet how that can be solved. Until it has been solved, it seriously limits how developers can use LLMs without opening the application up to exploitation.

[โ€“] jasonmcaffee@kbin.social 1 points 1 year ago

Thanks for the feedback, I appreciate it!
While it's true that prompt injection is a problem, I'm not sure that it seriously limits how we engineers can utilize LLMs.
Take for instance a banking application. In order to interact with the application via any interface, I must be authenticated. Every authentication mechanism will ensure that users can only act on their own behalf. The functions/API which I allow the user to call via the Natural Language Interface are the same that I allow the user to call through the web or mobile.
In this scenario, I can take full advantage of the LLM, and not worry much about prompt injection, save for some affordances like no internet access or access to other inputs that didn't come from the system.
I believe prompt injection becomes a concern when you have the LLM ingest untrusted data. e.g. reading stock price from an external site could open your system up to a prompt injection.