kevincox

joined 3 years ago
[–] kevincox@lemmy.ml 2 points 3 months ago

IMHO it doesn't majorly change the equation. Plus, in general, a single-word comment doesn't add much to the discussion. I like Podman and use it over Docker, but in terms of the original question I think my answer would be the same if OP were using Podman.

[–] kevincox@lemmy.ml 16 points 3 months ago (1 children)

To be fair this doesn't sound much different than your average human using the internet.

[–] kevincox@lemmy.ml 4 points 3 months ago

The Linux kernel is less secure for running untrusted software than a VM because most hypervisors have a far smaller attack surface.

> how many serious organization destroying vulnerabilities have there been? It is pretty solid.

The CVEs beg to differ. The reason most organizations don't get destroyed is that they don't run untrusted software on the same kernels that process their sensitive information.

> whatever proprietary software thing you think is best

This is a ridiculous attack. I never suggested anything about proprietary software. Linux's KVM is pretty great.

[–] kevincox@lemmy.ml 5 points 3 months ago (2 children)

I think assuming that you are safe because you aren't aware of any vulnerabilities is bad security practice.

Minimizing your attack surface is critical. Defense in depth is just one way to minimize your attack surface (but a very effective one). Putting your container inside a VM is excellent defense in depth. Running your container as a non-root user barely is, because you still have one Linux-kernel-sized hole in your Swiss-cheese defense model.
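The Swiss-cheese point can be put in rough numbers. A back-of-the-envelope sketch (the breach probabilities here are invented purely for illustration, and real layers are never fully independent):

```python
# Rough model: independent defensive layers, each with some chance of
# being breached. The numbers are invented for illustration only.
def escape_probability(layer_breach_probs):
    """Chance an attacker gets through every layer in the list."""
    total = 1.0
    for p in layer_breach_probs:
        total *= p
    return total

# Container isolation alone: one kernel-sized layer.
print(escape_probability([0.10]))

# Container inside a VM: the attacker must also break the hypervisor.
print(escape_probability([0.10, 0.01]))
```

Even under generous assumptions, adding a hypervisor layer multiplies the attacker's cost, whereas running as a non-root user still leaves everything behind the same single kernel layer.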

[–] kevincox@lemmy.ml 6 points 3 months ago (8 children)

I never said it was trivial to escape; I just said it wasn't a strong security boundary. Nothing is black and white. Docker isn't going to stop a resourceful attacker, but you may not need to worry about attackers who are going to spend >$100k on a 0-day vulnerability.

> The Linux kernel isn’t easy to exploit as if it was it wouldn’t be used so heavily in security sensitive environments

If any "security sensitive" environment is relying on Linux kernel isolation, I don't think they are taking their sensitivity very seriously. The most security sensitive environments I am aware of that do this are shared hosting providers. Personally I wouldn't rely on them to host anything particularly sensitive. But everyone's risk tolerance is different.

> use podman with a dedicated user for sandboxing

This is only ever so slightly better. Users have existed in the kernel for a very long time, so bugs there may be harder to find, but at the end of the day the Linux kernel is just too complex to provide strong isolation.

> There isn’t any way to break out of a properly configured docker container right now but if there were it would mean that an attacker has root

I would bet $1k that within 5 years we find out that this is false. Obviously all of the publicly known vulnerabilities have been patched. But more are found all of the time. For hobbyist use this is probably fine, but you should acknowledge the risk. There are almost certainly full kernel-privilege code execution vulnerabilities in the current Linux kernel, and it is very likely that at least one of these is privately known.

[–] kevincox@lemmy.ml 8 points 3 months ago

It is. Privilege escalation vulnerabilities are common. There is basically a 100% chance of unpatched container escapes in the Linux kernel. Some of these are very likely privately known and available for sale. So even if you are fully patched a resourceful attacker will escape the container.

That being said, if you are a low-value regular Joe who patches regularly, the risk is relatively low.

[–] kevincox@lemmy.ml 14 points 3 months ago (10 children)

Docker (and Linux containers in general) are not a strong security boundary.

The reason is simply that the Linux kernel is far too large and complex of an interface to be vulnerability free. There are regular privilege escalation and container escapes found. There are also frequent Docker-specific container escape vulnerabilities.

If you want strong security boundaries you should use a VM, or even better, separate hardware. This is why cloud container services run containers from different clients in different VMs: containers are not good enough to isolate untrusted workloads.

> if Gossa were to be a virus, would I have been infected?

I would assume yes. This would require the virus to know an unpatched exploit for Linux or Docker, but these frequently appear. There are likely many for sale right now. If you aren't a high value target and your OS is fully patched then someone probably won't burn an exploit on you, but it is entirely possible.

[–] kevincox@lemmy.ml 24 points 3 months ago

More likely the overlap of "running on Linux" and "needs to run AV software for compliance" is much smaller than "running on Windows" and the latter.

I'm sure people would notice if all of the major online services started crashing.

[–] kevincox@lemmy.ml 3 points 4 months ago

Yeah sorry. I should have said "ready-to-eat food that you actually want to eat". As in hot food regularly being cooked and refrigerated food that is brought in fresh multiple times a day.

[–] kevincox@lemmy.ml 22 points 4 months ago (1 children)

This is a great point, but it probably doesn't do the job as well as more modern alternatives.

  1. Easy to lose, possible data leak concerns.
  2. Easy to retain data that should have been deleted.
  3. Easy to lose data if a disk gets lost or damaged.
  4. Likely wastes time tracking down the disk you need or getting someone to transfer it.
  5. Lack of access logs and auditing capabilities.
  6. Easy way for viruses to spread.

Modern IT managed file servers solve a lot of real problems when well-managed.

[–] kevincox@lemmy.ml 44 points 4 months ago (2 children)

Convenience stores in Japan are much more than the cigarettes and lottery tickets of North America. They have lots of ready-to-eat food, snacks, drinks as well as some banking services, bill payments, faxing and more.

[–] kevincox@lemmy.ml 4 points 4 months ago* (last edited 4 months ago) (2 children)

There are a few reasons. Some of them are in the users' interest. Lots of people phrase their search like a question. "How do I turn off the wifi on my blue windows 11 laptop?"

While ignoring stopwords like "the" and "a" has been common for a while, there is lots of info here that the user probably doesn't actually care about. "my" is probably not helping the search, and "how" may not be either. Also in this case "blue" is almost certainly irrelevant. So by allowing near matches, search engines can return the most helpful articles even if they don't contain all of the words.

Secondly search engines often allow stemming and synonym matching. This isn't really ignoring words but can give the appearance of doing so. For example maybe "windows" gets stemmed to "window" and "laptop" is allowed to match with "notebook". You may get an article that is talking about a window of opportunity and writing in notebooks and it seems like these words have been ignored. This is generally helpful as often the best result won't have used the exact same words that you did in the query.
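A toy sketch of these query-relaxation steps (the stopword list, the stemming rule, and the synonym table are all invented for illustration; real engines use far larger dictionaries and statistical models):

```python
# Toy query normalization: stopword removal, crude stemming, and
# synonym folding. All word lists here are made up for illustration.
STOPWORDS = {"how", "do", "i", "the", "a", "my", "off", "on"}
SYNONYMS = {"laptop": "notebook"}  # fold variants to one canonical term

def normalize(query):
    terms = []
    for word in query.lower().split("?")[0].split():
        if word in STOPWORDS:
            continue
        word = word.rstrip("s")          # naive stemming: windows -> window
        word = SYNONYMS.get(word, word)  # synonym folding
        terms.append(word)
    return terms

print(normalize("How do I turn off the wifi on my blue windows 11 laptop?"))
# -> ['turn', 'wifi', 'blue', 'window', '11', 'notebook']
```

Note that "blue" survives normalization; in a real engine it would simply fail to match most candidate documents and get discounted by near-match ranking rather than being dropped outright.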

Of course then there are the more negative reasons.

  1. Someone decided that you can't buy anything if your product search returns no results. So they decided that they will show the "closest matches" even if nothing is anywhere close. This is infuriating and I have stopped using many sites because of it.
  2. If you need to make more searches or view more pages you also see more ads.
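The first of those behaviors can be sketched with Python's standard-library `difflib` (the product list and function are invented for illustration):

```python
from difflib import get_close_matches

PRODUCTS = ["usb cable", "hdmi cable", "laptop stand", "desk lamp"]

def search(query, fall_back=True):
    """Substring search with an optional 'closest match' fallback."""
    exact = [p for p in PRODUCTS if query in p]
    if exact or not fall_back:
        return exact
    # No real hits: return the "closest" matches anyway. With cutoff=0
    # this always returns something, no matter how irrelevant.
    return get_close_matches(query, PRODUCTS, n=3, cutoff=0)

print(search("cable"))        # genuine matches
print(search("garden hose"))  # no real match, but results appear anyway
```

With the fallback enabled, a query for something the store doesn't stock still returns a full page of "results", which is exactly the infuriating behavior described above.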