https://github.com/positive-intentions/chat
Probably not... because I'm comparing it to everything... but I'd like to share some details about how my app works so you can tell me what I'm missing. I'd like to have wording in my app that says something like "most secure chat app in the world"... I probably can't do that because it doesn't qualify... but I want to understand why.
I'm not an expert on cybersecurity. I'm sure there are many gaps in my knowledge in this domain.
Using JavaScript, I created a chat app. It uses peerjs-server to create an encrypted WebRTC connection, which is then used to exchange additional encryption keys from the cryptography functions built into browsers, adding a redundant layer of encryption. The key exchange is done Diffie-Hellman-style over WebRTC (which can be considered secure even when done over public channels).
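Roughly, the exchange looks something like this. This is a simplified sketch using the browser's built-in Web Crypto API; the names are illustrative and not the exact code from my module:

```javascript
// Sketch: ECDH key agreement with the browser's built-in Web Crypto API.
// Each peer generates an ephemeral key pair, sends its public key over the
// (already encrypted) WebRTC data channel, and derives a shared AES-GCM key.

async function generateKeyPair() {
  return crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    false,                      // private key is not extractable
    ["deriveKey"]
  );
}

async function exportPublicKey(keyPair) {
  // JWK is easy to send as JSON over the data channel
  return crypto.subtle.exportKey("jwk", keyPair.publicKey);
}

async function deriveSharedKey(myKeyPair, theirPublicJwk) {
  const theirPublicKey = await crypto.subtle.importKey(
    "jwk",
    theirPublicJwk,
    { name: "ECDH", namedCurve: "P-256" },
    false,
    []
  );
  return crypto.subtle.deriveKey(
    { name: "ECDH", public: theirPublicKey },
    myKeyPair.privateKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}
```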
-
I sometimes receive feedback like "JavaScript is inherently insecure". I disagree with this and have open-sourced my cryptography module. It's basically a thin wrapper around the vanilla crypto functions of a browser. A previous post on the matter.
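For context, the wrapper is essentially plumbing around calls like these (again, an illustrative sketch rather than the actual module):

```javascript
// Sketch of a "thin wrapper" over the browser's native crypto:
// AES-GCM with a random IV per message; nothing home-rolled beyond plumbing.

async function encryptMessage(key, plaintext) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit IV for GCM
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext };
}

async function decryptMessage(key, iv, ciphertext) {
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv },
    key,
    ciphertext
  );
  return new TextDecoder().decode(plaintext);
}
```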
-
Another concern for my kind of app (a PWA) is that the developer may introduce malicious code. This is an important point, which is why I open-sourced the project and give instructions for self-hosting. Self-hosting this app has some unique features: unlike many other self-hosted projects, this app can be hosted on GitHub Pages for free (instructions are provided in the readme). I'm also working on introducing a way for users to self-host federated modules. A previous post on the matter.
-
To prevent unauthorised code from running (e.g. from browser extensions or injected scripts), the app uses strict CSP headers. Self-hosting users should take note of this when setting up their own instance.
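For self-hosters, the policy looks something along these lines (illustrative only; the hostname is a placeholder, and on GitHub Pages you'd need an equivalent `<meta http-equiv="Content-Security-Policy">` tag since you can't set response headers there):

```
Content-Security-Policy:
  default-src 'self';
  script-src 'self';
  style-src 'self';
  connect-src 'self' wss://your-peerjs-server.example;
  object-src 'none';
  base-uri 'none';
  frame-ancestors 'none'
```

The `connect-src` entry is what allows the signalling WebSocket to your own peerjs-server and nothing else.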
-
I received feedback that the Signal/SimpleX protocols are great, etc. I'd like to compare that opinion to how my todo app demo works (the work is all experimental, work-in-progress, and far from finished). The demo shows the basic functionality of a simple decentralized todo list. This should already be reasonably secure. I could add a few extra endpoints for exchanging keys Diffie-Hellman-style, which at this point is relatively trivial to implement. I think its simplicity could be a security feature.
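Those extra endpoints would roughly amount to sending the exported public key over the existing PeerJS data connection and deriving the shared key on both sides. This sketch reuses the illustrative helpers from earlier; `conn` is a PeerJS DataConnection:

```javascript
// Illustrative continuation of the earlier sketch: doing the exchange over an
// existing PeerJS data connection. Both peers run the same function.
async function exchangeKeys(conn) {
  const myKeyPair = await generateKeyPair();

  // Listen for the other side's public key before sending ours,
  // so nothing is missed if their message arrives first.
  const sharedKeyPromise = new Promise((resolve) => {
    conn.on("data", async (msg) => {
      if (msg && msg.type === "pubkey") {
        resolve(await deriveSharedKey(myKeyPair, msg.jwk));
      }
    });
  });

  conn.send({ type: "pubkey", jwk: await exportPublicKey(myKeyPair) });
  return sharedKeyPromise;
}
```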
-
The key detail that makes this approach unique is that, as a web app, unlike other solutions, it lets users choose any device/OS/browser.
I think if I stick to the principle of avoiding any kind of "required" service provider (myself included) and allow the frontend and the peerjs-server to be hosted independently, I'm on track to create a chat system with the "fewest moving parts". I hope you will agree this is true P2P, and I hope I can use this as a step towards true privacy and security. Security might be further improved by using a trusted VPN.
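For anyone wondering what "hosted independently" means in practice: the client just needs to be pointed at your own peerjs-server instance instead of the default broker. The hostname and path below are placeholders:

```javascript
// Illustrative: pointing the PeerJS client at an independently hosted
// peerjs-server instead of the default cloud broker.
import { Peer } from "peerjs";

const peer = new Peer({
  host: "peerjs.example.org", // your own peerjs-server instance
  port: 443,
  path: "/",
  secure: true,
});

peer.on("open", (id) => console.log("my peer id:", id));
peer.on("connection", (conn) => {
  conn.on("data", (data) => console.log("received:", data));
});
```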
I created a threat model for the app in hopes that I could get a pro-bono security assessment, but understandably the project is too complicated for pro-bono work.
While there are several similar apps out there, I think mine takes a distinctly different approach, so it's hard to find best practices for the functionality I want to achieve, in particular security practices for P2P technology.
(Note: this app is an unstable, experimental proof of concept and not ready to replace any other app or service. It's far from finished and provided for testing and demo purposes only. This post is to get feedback on the app and determine whether I'm going in the right direction for a secure chat app.)
Isn't that exactly what hashing of JS libraries is for? E.g. https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity, as seen on e.g. https://cdnjs.com/libraries/three.js: giving you the script to execute, yes, but also a hash to verify that what you receive is indeed what you expect?
So, assuming a trusted version is loaded once (which HAS to be the case anyway, otherwise you can't start, same as with a native executable), then an arbitrary version can't be loaded next without being validated first.
PS: I'm not saying this is what OP does; I'm saying that executing code (JavaScript or not) that must be downloaded first is not in itself a security problem.
Ok, I looked at the Mozilla page. If I understand it right, it lets the server specify a hash that the client checks against a remote resource, such as a script from a CDN. So that can help notice a compromised CDN, but not a compromised server. If the hash is permanently stored in the browser, that is better, but there are also browser updates, to say nothing of exploits. This approach just seems doomed.
Added: hmm, maybe you could load the page from a local file or a bookmarklet containing the hash. But then the whole app might as well be local. It was once possible to sign JS with a code-signing certificate, but I haven't heard about those in ages.
What you're looking for is called remote attestation but again, many attacks possible.
Not sure I understand the distinction. A CDN is a server, so if OP is hosting code to execute on their server, it would be checked by whatever has already been downloaded and run locally before, i.e. a PWA.
I'm rather sure that localStorage persists over browser updates, so that can be "permanent enough".
I mean... sure, but at that point the same applies to native. If you can't trust the running environment you are screwed anyway.
The idea is that the server (yoursite.com) loads some remote resources, like
<script src="https://crappycdn.com/react.js" integrity="sha384-12345abc" crossorigin="anonymous"></script>
or whatever. The browser checks that the CDN sends what the server told it to expect. There is also the issue that OP apparently plans to push frequent updates to the server. Until that settles down, hash checking is useless since the code keeps changing. Also, some of us clear that local storage pretty often.
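(For reference, the integrity value is just a base64-encoded digest of the file the server expects, e.g. computed roughly like this in Node; the filename is only an example:)

```javascript
// Compute an SRI value (sha384) for a file you intend to serve.
const crypto = require("crypto");
const fs = require("fs");

const body = fs.readFileSync("react.js");
const digest = crypto.createHash("sha384").update(body).digest("base64");
console.log(`integrity="sha384-${digest}"`);
```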
A huge, buggy, constantly changing program like a browser is more likely to have exploits than a simpler, single-purpose program. Also, yes, the running environment that sees plaintext is within the security boundary, so you do have to worry about it. If you saw the movie "Citizenfour", the journalists communicated with Edward Snowden using laptops that were air-gapped, i.e. completely disconnected from the internet. They'd get an encrypted email (GPG) on a connected computer, transfer it to the gapped machine on a USB stick(?), and decrypt and read it on the gapped machine. Even that had vulnerabilities, of course.
As for using browser cryptography, I never got around to trying to understand this in detail, but there was a known incident where some Facebook app somehow intercepted the TLS-encrypted traffic of other apps. Presumably such schemes can be extended to the browser libraries.
https://doubleagent.net/onavo-facebook-ssl-mitm-technical-analysis/
Because of all these issues, high security and general purpose computers/phones just don't mix. It's better to avoid pretense and just aim to make something that's reasonably secure and that's easy to use. Remember PGP stood for "pretty good privacy". That's a more realistic claim.
Anyone remember Tinfoil Hat Linux? Heeheehee.
I think for my app to be regarded well security-wise, it's important for people to use their own instances. The "live app", as I call it, is an experimental proof of concept. I'm wondering about the idea of the app being run from your own fork that occasionally syncs from upstream. As it stands my app is too garbage for anyone to want a copy, but that should eliminate those concerns.
It's also an offline-first PWA. Right now it fetches the latest version, but I don't see why I can't add a toggle to the UI to not fetch if there is a cached copy (a rough sketch of the idea is below)... Again, the app is unstable and experimental. I'm working on fixes and improvements as I see them to make a better app. It's a while away from me being able to advocate self-hosting to users. But in theory it could address your concerns?
Many attack vectors indeed still exist. P2P web tech seems to allow for an interesting approach and could help reduce the attack surface. The app is available for iOS, Android and desktop. Let me know if you have more concerns.
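To make the offline-first point above concrete, here's a rough, hypothetical sketch of a cache-first service worker with that "don't fetch if cached" behaviour. The cache name and file list are illustrative, not the real code:

```javascript
// sw.js - sketch of a cache-first strategy: serve from cache when possible,
// only hit the network for anything not already cached.
const CACHE_NAME = "app-shell-v1";

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(["/", "/index.html", "/app.js"])
    )
  );
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) => cached || fetch(event.request)
    )
  );
});
```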
Shrug, it sounds like you can do pretty well using browser capabilities, given browser limitations. That doesn't help so much when the browser itself is a huge attack surface. The standalone apps get rid of the browser (they don't use a WebView or anything like that, I hope), and as such, I'd probably use them in preference to the browser version. In the end though, none of this stuff is anywhere near the level of what payment terminals or bitcoin wallets use. That's probably fine for most users.
There is a site pageintegrity.net that offers a browser extension that allows signing web pages. Again I'm dubious, but at least they are thinking about a valid problem.
I'd probably be satisfied using old-fashioned Unix talk (ytalk) over SSH tunnels, but I think these days you need mobile clients.