BitSound

[–] BitSound@lemmy.world 1 points 9 hours ago

What kernel are you running? From what I understand, that should be the major differentiator if you're not using S3.

[–] BitSound@lemmy.world 1 points 9 hours ago

Couldn't tell you unfortunately. It looks like AMD is also on board with deprecating S3 sleep, so I would guess that it's not significantly better. The kernel controls the newer standby modes, so it's really going to depend on how well it's supported there.
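
If you want to see which standby modes your kernel actually exposes, here's a minimal sketch (assuming a Linux system with the standard sysfs interface; the active mode is the one shown in brackets, e.g. "[s2idle] deep"):

# Minimal check of the suspend modes the kernel offers; purely illustrative.
with open("/sys/power/mem_sleep") as f:
    print(f.read().strip())

If "deep" doesn't show up there at all, the firmware likely isn't offering S3 to the kernel in the first place.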

[–] BitSound@lemmy.world 9 points 13 hours ago (7 children)

Sleep kind of sucks on the original 11th gen hardware. They pushed out a BIOS update that broke S3 sleep, so now all you've got is s2idle, which the kernel is only OK at. Your laptop bag might heat up. S3 breaking isn't really their fault; Intel deprecated it. Still annoying, though. I've heard the Chromebook version and other newer gens have better sleep support.

Other than that, it's great. NixOS runs just fine, and even the fingerprint reader works, which has been rare for Linux.

[–] BitSound@lemmy.world 11 points 1 day ago

Meshuggah:

https://www.youtube.com/watch?v=m9LpMZuBEMk

Listened to them before I got into metal, came back to them later, and now love them. That's probably from one of their more accessible records; they also have more experimental stuff like this:

https://www.youtube.com/watch?v=Dw3SdOFmubU

[–] BitSound@lemmy.world 11 points 1 day ago (1 children)

Do you have any links to read up on him? I know this is a very contentious topic, but I haven't heard much about him and I'm curious. What would you hold as his worst actions?

[–] BitSound@lemmy.world 7 points 1 day ago (1 children)

It is a bold claim, but based on their success with ruff, I'm optimistic that it might pan out.

[–] BitSound@lemmy.world 10 points 1 day ago* (last edited 1 day ago) (3 children)

This is a silly argument:

[..] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we'd even get close,’ Olivia Guest adds.

That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.

EDIT: From the paper:

The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn't mean it has any relationship to the real world.

[–] BitSound@lemmy.world 11 points 2 days ago* (last edited 1 day ago)

Canonical lives and dies by the BDFL model. It allowed them to do some great work early on in popularizing Linux with lots of polish. Canonical still does good work when external pressure forces them to, like contributing upstream. The model falters when they have their own sandbox to play in, because the BDFL model means that any internal feedback like "actually this kind of sucks" just gets brushed aside. It doesn't help that the BDFL in this case is the CEO, founder, and funder of the company, and is paying everyone working there. People generally don't like to risk their job to say the emperor has no clothes and all that; it's easier to just shrug your shoulders and let the internet do that for you.

Here are good examples of when the internal feedback failed and the whole internet had to chime in and say that the hiring process did indeed suck:

https://news.ycombinator.com/item?id=31426558

https://news.ycombinator.com/item?id=37059857

"markshuttle" in those threads is the owner/founder/CEO.

 

Mindustry dev has had enough

[–] BitSound@lemmy.world 10 points 2 days ago

It's a nice change of pace to see how they interact when they're not busy parenting Calvin.

[–] BitSound@lemmy.world 1 points 1 week ago

Thanks, that makes sense.

[–] BitSound@lemmy.world 1 points 1 week ago

Makes sense, thanks!

 

I've encountered some conflicting usages of Tag:landuse=residential. Some areas are very specific and broken down into individual blocks, while others cover multiple blocks. Here's an example of both styles adjacent to each other:

https://www.openstreetmap.org/way/653823458

https://www.openstreetmap.org/way/652122607

The wiki doesn't really say much on the topic. Does anyone have opinions/rules of thumb on how to tag them exactly? It seems like all adjacent areas not separated by major highways should be joined together?

I've encountered some residential areas that are mapped block by block, literally following the curb, rounded corners and all. That seems too specific?

 

I'm looking at Tag:crossing=marked, and it's a little vague. It says:

Set a node on the highway where the transition is and add highway=crossing + crossing=marked.

If the crossing is also mapped as a way, tag it as highway=footway footway=crossing crossing=marked or highway=cycleway cycleway=crossing crossing=marked as appropriate.

Doesn't that violate the principle of One feature, one OSM element? For example, here's a crossing from the area that overpass-turbo shows by default:

https://www.openstreetmap.org/node/7780814396

https://www.openstreetmap.org/way/833493479

You've got a way with these tags:

crossing=marked
crossing:markings=yes
footway=crossing
highway=footway
surface=asphalt

And the intersection node with the street it's crossing has these tags:

crossing=marked
crossing:markings=yes
highway=crossing
tactile_paving=no

Shouldn't that be one or the other? It makes sense to me to represent the crossing as a way with all the tags, and leave the intersection untagged. I noticed though that StreetComplete doesn't really like that, and will give you quests to add tags to the intersection node even if the way is properly tagged.
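
To make that concrete, what I'm picturing (just a sketch, reusing the tags from the example above) is the crossing way carrying everything:

highway=footway
footway=crossing
crossing=marked
crossing:markings=yes
tactile_paving=no
surface=asphalt

with the node where it meets the street left as a plain, untagged intersection node.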

[–] BitSound@lemmy.world 19 points 3 weeks ago (2 children)

On a related note, I think libraries do need a bit of a facelift, and not just be "the place where books live". It's important to keep that function, but also to expand into "a place where learning happens". I know lots of libraries are doing this sort of thing, but your average person is probably still stuck in the "place where books live" mindset, as you allude to. I'm talking stuff like 3D printers, makerspaces, diybio, classes about detecting internet bullshit, etc.

 

Original comment:

I don’t know much about voting systems, but I know someone who does. Unfortunately he’s currently banned. Maybe we can wait until his 3-month ban expires and ask him for advice?

Previous discussion

 

I've got a restaurant patio tagged as leisure=outdoor_seating. The wiki page for that tag says you can add operator=* as a string, but I'm wondering if I can add a Relation between the patio and the restaurant. This is really for semantic reasons: if the restaurant changes its name or gets a new owner, it would be nice if the patio didn't then have out-of-date information.

I don't see a Relation type that's relevant. I don't want to just start doing my own thing, so does anyone know of a way to use a Relation here, and if not, is that something that can be proposed?

Thanks for all of the responses on my other questions, btw. This community has been very helpful.

 

I'm taking a look at traffic circles like this:

https://www.openstreetmap.org/edit#map=19/33.790043/-118.142392

The main traffic circle has been split up into 8 different segments so that individual segments can have Relations added to them, such as the "Long Beach Transit 174" bus route. I'm new to mapping, so I don't really know what to expect, but it seems odd to split it up like that. It ends up adding noise in StreetComplete: instead of saying "yep, this traffic circle is asphalt" once, I have to go to a bunch of tiny segments and mark each one of them as asphalt.

I've also seen this for items generated from Lyft data, where a single road gets split into tiny segments so that one part can be marked as "no u-turn" or "no left turn". StreetComplete wants me to mark each tiny segment individually.

 

I'm looking to tag a simple 4 way stop with typical US red/yellow/green traffic signals. I was wondering what the difference between signal and traffic_lights is in iD, and the wiki page just says this about traffic_lights:

A typical traffic signal. This value was the second most common value as of 2021-09-15 despite being undocumented until that point.

Looking at the talk page there, it links to this post, where an iD dev seems rather annoyed at the wiki:

I took a look at https://wiki.openstreetmap.org/wiki/Key:traffic_signals and now I'm furious.

Forget it.

There is no way I'm going to support traffic_signals=yes for pedestrian signals, after the wiki folks aren't even ok with iD using traffic_signals=signal for a normal traffic signal - a tagging that was accepted just not very widespread before iD started doing it.

The OSM Wiki needs to end. Seriously. It's ruining this project.

I'm using iD, so should I just leave it as the default signals and leave the fighting up to the devs? As an aside, does anyone know why there seems to be so much animosity there? Kind of surprising, TBH.

 

I've encountered a bus stop that still exists, but has a sign from the city saying that no buses stop there. There's the disused tag on the wiki, which seems relevant, but I'm not sure how to tag it exactly. There are lots of tags like ref, route_ref, operator:wikidata, and so on. Should all of those tags get prefixed with disused:?
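
To illustrate what I'm asking (just a sketch of the fully-prefixed option, assuming the stop's main tag is highway=bus_stop and using * as a placeholder for the existing values), would it end up looking something like this?

disused:highway=bus_stop
disused:ref=*
disused:route_ref=*
disused:operator:wikidata=*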

 

I'm trying to correct local buildings on OSM. I've noticed that some of the buildings were traced from one set of satellite images and are off relative to others. One of the background imagery options I have while editing is called orthoimagery. Can I assume that's the best imagery to trace buildings from?

 

Finished reading the Remembrance of Earth's Past series (i.e. The Three-Body Problem and the other books) and have opinions. WARNING: SPOILERS

Overall I liked it a lot. I felt like the books could've been a lot tighter, though, and Liu Cixin really needed an editor. Lots of cool ideas, but I did not care about the 3 old guys arguing with each other in the first part of the second book. It gave some background info, but that could've been collapsed into a few paragraphs. I also didn't need the whole backstory of some ship's cook whose plot relevance was about 10 seconds long.

I didn't have my mind blown by the ideas in it. Not that I begrudge people who do; I'm just not lying awake worrying about the dark forest hypothesis. Maybe it's because there's not much we can do about it anyways 🤷. I did really like the recasting of string theory's 11 dimensions not as some beautiful reality of the universe, but as the result of brutal galactic warfare.

I thought the FTL communication was kind of weird for a series that mostly tried to stick to (or at least give lip service to) hard sci-fi. If you haven't seen it before, this is a good explainer of the problems with FTL communication: https://projectrho.com/public_html/rocket/fasterlight.php. In the end, I think it wants to be cosmic horror more than hard sci-fi, which is fine.

One minor nit I have is that at the very end they make a big deal about messages lasting for billions of years, and they arrive at carving messages into stone. Good idea, but even then the message got partially lost. Why not add redundancy and carve it multiple times? I was also kind of expecting something "clever", like writing the message into the genes of the mobile trees or something.
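
Just to sketch the redundancy idea (a toy example I made up, nothing from the book): if the same message is carved three times and different spots on each copy erode, a simple majority vote per position recovers the whole thing, as long as no position is lost in two copies at once.

# Toy repetition-code sketch: three copies of a message, with None marking
# characters that have "eroded" in different places on each copy.
from collections import Counter

copies = [list("CARVE IT IN STONE") for _ in range(3)]
copies[0][3] = None
copies[1][8] = None
copies[2][14] = None

# Majority vote at each position across the surviving characters.
recovered = "".join(
    Counter(c for c in chars if c is not None).most_common(1)[0][0]
    for chars in zip(*copies)
)
print(recovered)  # -> CARVE IT IN STONE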

 

There's a nice Hobbes being drawn on Canvas. Is someone from here drawing it? Is there a template?
