Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double, by a rough hand count over the last month) than the next-highest community that I moderate (Politics, and this during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are doing so in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" kind of way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.

cross-posted from: https://jlai.lu/post/10771035, https://jlai.lu/post/10771034

Personal review:

A good recap of Doctorow's previous writings and talks on the subject for the first third, but a bit long. Having paid attention to them for the past year or two, I found my attention drifting a few times. I ended up being more impressed with how much he's managed to condense his explanation of "enshittification" from 45+ minutes down to around 15.

As soon as he starts building off of that to work towards the core of his message for this talk, I was more or less glued to the screen. At first because it's not exactly clear where he's going, and there are (what felt like) many specific court rulings to keep up with. Thankfully, once he has laid enough groundwork, he gets straight to his point. I don't want to spoil or otherwise lessen the performance he gives, so I won't directly comment on what his point is in the body of this post - I think the comments are better suited for that anyway.

I found the rest to be pretty compelling. He rides the fine line between directionless discontent and overenthusiastic activist-with-a-plan as he doubles down on his narrative by calling back to the various bits of groundwork he laid before. Now that we're "in" on the idea, what felt like stumbling around in the dark turns into an illuminating path through the dynamics of power between tech bosses and their employees over the last twenty to forty years. The rousing call to action was also a great way to wrap it all up.

I've become very biased towards Cory Doctorow's ideas, in part because they line up with a lot of the impressions I have from my few years working as a dev in a big-ish multinational tech company. This talk has done nothing to diminish that bias - on the contrary.


There is an interesting study (May 2024), also linked in the article: When Online Content Disappears

Historians of the future may struggle to understand fully how we lived our lives in the early 21st Century. That's because of a potentially history-deleting combination of how we live our lives digitally – and a paucity of official efforts to archive the world's information as it's produced these days.

However, an informal group of organisations are pushing back against the forces of digital entropy – many of them operated by volunteers with little institutional support. None is more synonymous with the fight to save the web than the Internet Archive, an American non-profit based in San Francisco, started in 1996 as a passion project by internet pioneer Brewster Kahle. The organisation has embarked on what may be the most ambitious digital archiving project of all time, gathering 866 billion web pages, 44 million books, 10.6 million videos of films and television programmes and more. Housed in a handful of data centres scattered across the world, the collections of the Internet Archive and a few similar groups are the only things standing in the way of digital oblivion.

"The risks are manifold. Not just that technology may fail, but that certainly happens. But more important, that institutions fail, or companies go out of business. News organisations are gobbled up by other news organisations, or more and more frequently, they're shut down," says Mark Graham, director of the Internet Archive's Wayback Machine, a tool that collects and stores snapshots of websites for posterity. There are numerous incentives to put content online, he says, but there's little pushing companies to maintain it over the long term.
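The Wayback Machine that Graham directs also exposes a small public "availability" API for checking whether a URL has a saved snapshot. Here's a minimal Python sketch of how you'd use it; the endpoint and response shape are from the Archive's public documentation, and the sample response is canned so the parsing can be shown offline:

```python
# Sketch of the Wayback Machine's public availability API: a GET to
# https://archive.org/wayback/available?url=<page>&timestamp=<YYYYMMDD>
# returns JSON describing the snapshot closest to that date, if any.
# The sample response below is canned so the parsing runs offline;
# swap in urllib.request.urlopen(availability_url(...)) for a live query.
import json
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build the API request URL; timestamp narrows to the closest snapshot."""
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Return (archived_url, timestamp) of the closest snapshot, or None."""
    data = json.loads(response_text)
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"], closest["timestamp"]
    return None

# Response shape as documented by the Internet Archive:
sample = json.dumps({"archived_snapshots": {"closest": {
    "available": True,
    "url": "http://web.archive.org/web/20240101000000/http://example.com/",
    "timestamp": "20240101000000",
    "status": "200",
}}})

print(availability_url("example.com", "20240101"))
print(closest_snapshot(sample))
```

If nothing has been archived, `archived_snapshots` comes back empty, which is exactly the "digital oblivion" case the article is worried about.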

Despite the Internet Archive's achievements thus far, the organisation and others like it face financial threats, technical challenges, cyberattacks and legal battles from businesses who dislike the idea of freely available copies of their intellectual property. And as recent court losses show, the project of saving the internet could be just as fleeting as the content it's trying to protect.

"More and more of our intellectual endeavours, more of our entertainment, more of our news, and more of our conversations exist only in a digital environment," Graham says. "That environment is inherently fragile."


Archived link

  • Research from Infyos has identified that companies accounting for 75 per cent of the global battery market have connections to one or more companies in the supply chain facing allegations of severe human rights abuses.

  • Most of the allegations of severe human rights abuses involve companies mining and refining raw materials in China that end up in batteries globally, particularly in Xinjiang Uyghur Autonomous Region (XUAR) in northwest China.

  • “The relative opaqueness of battery supply chains and the complexity of supply chain legal requirements means current approaches like ESG audits are out of date and don’t comply with new regulations. Most battery manufacturers and their customers, including automotive companies and grid-scale battery energy storage developers, still don’t have complete supply chain oversight," says Sarah Montgomery, CEO & co-founder, Infyos.

  • Supply chain changes are needed to eliminate widespread forced labour and child labour abuses occurring in the lithium-ion battery market, Infyos added.


How are they retaining staff?


Archived link

High-tech CCTV, super-accurate DNA-testing technology and facial tracking software: China is pushing its state-of-the-art surveillance and policing tactics abroad.

Delegates from law enforcement agencies across the world descended this week on a port city in eastern China, where dozens of local firms, several linked to repression in the northwestern region of Xinjiang, showcased their work.

China is one of the most surveilled societies on Earth, with millions of CCTV cameras scattered across cities and facial recognition technology widely used in everything from day-to-day law enforcement to political repression.

Its police serve a dual purpose: keeping the peace and cracking down on petty crime while also ensuring challenges to the ruling Communist Party are swiftly stamped out.

During the opening ceremony in Lianyungang, Jiangsu province, China's police minister lauded Beijing's training of thousands of police from abroad over the last 12 months -- and promised to help thousands more over the next year.

An analyst said this was "absolutely a sign that China aims to export" its policing.

"Beijing is hoping to normalise and legitimise its policing style and... the authoritarian political system in which it operates," Bethany Allen at the Australian Strategic Policy Institute said.

[...]

"The more countries that learn from the Chinese model, the fewer countries willing to criticise such a state-first, repressive approach."

[...]

Tech giant Huawei said its "Public Safety Solution" was now in use in over 100 countries and regions, from Kenya to Saudi Arabia.

[...]

The United States sanctioned SDIC Intelligence Xiamen Information, formerly Meiya Pico, for developing an app "designed to track image and audio files, location data, and messages on... cellphones".

In 2018, the US Treasury said residents of Xinjiang "were required to download a desktop version of" that app "so authorities could monitor for illicit activity".

China has been accused of incarcerating more than one million Uyghurs and other Muslim minorities in Xinjiang -- charges Beijing vehemently rejects.

[...]

Several delegations expressed interest in learning from the Chinese police.

"We have come to establish links and begin training," Colonel Galo Erazo from the National Police of Ecuador told AFP.

"Either Chinese police will go to Ecuador, or Ecuadorian police will come to China," he added.

One expert said that this outsourcing of security is becoming a key tool in China's efforts to promote its goals overseas.

[...]

"China's offers of police cooperation and training give them channels through which to learn how local security forces -- many either on China's periphery or in areas that Beijing considers strategically important -- view the security environment," [Sheena Greitens at the University of Texas in the U.S.] said.

"These initiatives can give China influence within the security apparatus if a threat to Chinese interests arises."

[Corrected broken link.]


(Seeing as I already posted an AI-is-dangerous article, here's one that shows the benefits of AI.)

Inside a bustling unit at St. Michael's Hospital in downtown Toronto, one of Shirley Bell's patients was suffering from a cat bite and a fever, but otherwise appeared fine — until an alert from an AI-based early warning system showed he was sicker than he seemed.

While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program.

The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team's in-house AI technology, dubbed Chartwatch.
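The article doesn't describe Chartwatch's internals, but the behaviour Bell describes (flagging an out-of-range lab value the moment it arrives, rather than at the scheduled noon review) can be sketched as a simple rule. The threshold and field names below are illustrative assumptions, not the hospital's actual model:

```python
# Illustrative early-warning rule: evaluate each lab result as it arrives
# and alert the nurse immediately instead of waiting for the noon review.
# The reference range is a typical adult WBC range; Chartwatch's real model
# is more sophisticated than a fixed threshold.

NORMAL_WBC = (4.0, 11.0)  # white blood cell count, x10^9 cells/L

def check_result(patient_id, test, value):
    """Return an alert message for an out-of-range result, else None."""
    if test == "wbc":
        lo, hi = NORMAL_WBC
        if value > hi:
            return f"ALERT {patient_id}: WBC {value} above {hi}, notify nurse"
        if value < lo:
            return f"ALERT {patient_id}: WBC {value} below {lo}, notify nurse"
    return None

print(check_result("bed-12", "wbc", 24.3))  # fires hours before the noon check
print(check_result("bed-12", "wbc", 7.1))   # normal result, no alert
```

The value isn't in the rule itself but in the timing: evaluating results on arrival is what buys the several-hours head start described above.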

"There's lots and lots of other scenarios where patients' conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier," she said. "It's not replacing the nurse at the bedside; it's actually enhancing your nursing care."


cross-posted from: https://feddit.org/post/2895443

Archived link

Over the past decade and a half, the Chinese techno-authoritarian state has deeply entrenched itself in the day-to-day lives of citizens through the use of highly sophisticated surveillance technology. Two of the world’s largest manufacturers of video surveillance equipment, Hikvision and Dahua, have revolutionized the industry and exported their products to hundreds of countries worldwide.

Chinese citizens are required to use their ID when engaging in various activities, from signing up for WeChat, the ubiquitous messaging app, to using super-apps like Alipay or WeChat Pay for tasks such as public transport, online shopping, and booking movie tickets.

This extensive network allows the government to track citizens’ everyday activities and create detailed profiles, effectively establishing a Panopticon state of censorship and repression.

The most prominent feature of China’s surveillance state is its extensive network of facial recognition cameras, which are nearly ubiquitous. The Chinese government launched a programme known as Skynet in 2005, which mandated the installation of millions of cameras throughout the nation.

This initiative was further expanded in 2015 with the introduction of SharpEyes, aiming for complete video coverage of ‘key public areas’ by 2020.

The government, in collaboration with camera manufacturers such as Hikvision and Dahua, framed this as a progressive step towards developing ‘smart cities’ that would enhance disaster response, traffic management, and crime detection.

However, the technology has been predominantly employed for repressive purposes, reinforcing compliance with the Communist Party of China.

[...]

Although many of the ‘threats’ identified by this system may turn out to be false alarms, the omnipresent vigilance of the state ensures that even the slightest dissent from citizens is swiftly suppressed.

[...]

China has become the first known instance of a government employing artificial intelligence for racial profiling, a practice referred to as ‘automated racism’, with its extensive facial recognition technologies specifically identifying and monitoring minority groups, particularly Uyghur Muslims, who have been subjected to numerous human rights violations by the Chinese Communist Party (CCP).

[This includes] mass detentions, forced labour, religious oppression, political indoctrination, forced sterilisation and abortion, as well as sexual assault.

In Xinjiang, an extreme form of mass surveillance has transformed the province into a battleground, with military-grade cyber systems imposed on the civilian population. The heavy investment in policing and suppressing Uyghur Muslims has made Xinjiang a testing ground for highly intrusive surveillance technologies that may be adopted by other authoritarian regimes. The Chinese government is also known to collect DNA samples from Uyghur Muslims residing in Xinjiang, a practice that has drawn widespread international condemnation for its unethical application of science and technology.

[...]

The Chinese government has adeptly formulated legislation that unites citizens and the state against private enterprises. Laws such as the Personal Information Protection Law and the Data Security Law, both enacted in 2021, impose stringent penalties on companies that fail to secure user consent for data collection, effectively diverting scrutiny away from the state’s own transgressions.

[...]


As investors weigh OpenAI’s valuation, they might consider the humble paperclip. A cautionary tale about corporate profit maximizers building a robot that so excels in producing the office supply that it wipes out humanity might seem far-fetched. But a single-minded capitalist could make the economically rational decision to bear such a risk. As OpenAI races towards a fundraising that could value it at $150 billion, the implicit promise is that gains enormous enough to make that danger thinkable are on the horizon. That itself underscores the barriers to growth.

The paperclip story goes like this. One day, engineers at ACME Office Supplies unveil a hyper-sophisticated AI machine with one goal: produce as many paperclips as possible. The incomparable silicon intellect chases this task to the furthest extreme, converting every molecule on Earth into paperclips and promptly ending all life.

Profit-hungry OpenAI investors like Microsoft might be assumed, like ACME, to only value short-term gains, inviting the risk that they build their own Paperclip Maximizer. Sam Altman, OpenAI’s CEO, says that he is mindful of the risk. His company’s structure is meant to limit bad incentives, capping profit available to investors. Such protections are worth an asterisk now: a ceiling on profit was set in 2019 at a 100 times return for initial investors. OpenAI initially expected to lower it over time. Instead, the company's latest fundraising now hinges on changing that structure, including by removing the cap, Reuters reported.
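The cap's arithmetic is simple enough to sketch. The 100x multiple is from the reporting above; the dollar figures are illustrative, not OpenAI's actual terms:

```python
# Quick arithmetic on the capped-profit structure: first-round investors'
# returns were capped at 100x their investment; anything beyond the cap
# flows to the controlling nonprofit. Figures below are illustrative.

CAP_MULTIPLE = 100

def investor_payout(invested, gross_return):
    """Split a gross return into the investor's capped share and the excess."""
    cap = CAP_MULTIPLE * invested
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit

# A $10m stake that grows 150x: the investor keeps $1bn, $500m overflows the cap.
print(investor_payout(10e6, 1500e6))  # -> (1000000000.0, 500000000.0)
```

The overflow term is the whole debate: removing the cap sends `to_nonprofit` to investors instead, which is why the structure change matters to a $150 billion valuation.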


I'm sure everyone in this community is already familiar with the concept that this video is presenting, and might even already know all of the examples he gives. But I got a laugh out of it, and I love his presentation style.


cross-posted from: https://lemmy.world/post/20028344

Despite US dominance in so many different areas of technology, we're sadly somewhat of a backwater when it comes to car headlamps. It's been this way for many decades, a result of restrictive federal vehicle regulations that get updated rarely. The latest lights to try to work their way through red tape and onto the road are active-matrix LED lamps, which can shape their beams to avoid blinding oncoming drivers.

From the 1960s, Federal Motor Vehicle Safety Standards allowed for only sealed high- and low-beam headlamps, and as a result, automakers like Mercedes-Benz would sell cars with less capable lighting in North America than they offered to European customers.

A decade ago, this was still the case. In 2014, Audi tried unsuccessfully to bring its new laser high-beam technology to US roads. Developed in the racing crucible that is the 24 Hours of Le Mans, the laser lights illuminate much farther down the road than the high beams of the time, but in this case, the lighting tech had to satisfy both the National Highway Traffic Safety Administration and the Food and Drug Administration, which has regulatory oversight for any laser products.

The good news is that by 2019, laser high beams were finally an available option on US roads, albeit with the power turned down to reduce their range.

NHTSA's opposition to advanced lighting tech is not entirely misplaced. Obviously, being able to see far down the road at night is a good thing for a driver. On the other hand, being dazzled or blinded by the bright headlights of an approaching driver is categorically not a good thing. Nor is losing your night vision to the glare of a car (it's always a pickup) behind you with too-bright lights that fill your mirrors.

This is where active-matrix LED high beams come in, which use clusters of controllable LED pixels. Think of it like a more advanced version of the "auto high beam" function found on many newer cars, which uses a car's forward-looking sensors to know when to dim the lights and when to leave the high beams on.

Here, sensor data is used much more granularly. Instead of turning off the entire high beam, the car only turns off individual pixels, so the roadway is still illuminated, but a car a few hundred feet up the road won't be.
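The pixel-level dimming can be sketched as a masking problem: map each detected vehicle's bearing onto the slice of the beam its light would hit, and switch off only those pixels. Everything below (pixel count, field of view, margin) is made up for illustration; real systems work from camera images, not precomputed bearings:

```python
# Illustrative sketch of pixel masking in an adaptive ("matrix") high beam:
# each LED pixel covers a horizontal slice of the beam, and pixels whose
# slice contains a detected vehicle are switched off while the rest stay lit.
# Pixel count, field of view, and margin are made up for the example.

N_PIXELS = 16            # LED pixels across the high beam, left to right
FIELD_OF_VIEW = 40.0     # degrees covered by the beam, centered on 0

def pixel_mask(detected_angles, margin=1.5):
    """Return per-pixel on/off states given detected vehicle bearings (deg)."""
    half = FIELD_OF_VIEW / 2
    width = FIELD_OF_VIEW / N_PIXELS
    mask = []
    for i in range(N_PIXELS):
        lo = -half + i * width          # this pixel's angular slice
        hi = lo + width
        blocked = any(lo - margin <= a <= hi + margin for a in detected_angles)
        mask.append(not blocked)        # True = pixel stays lit
    return mask

# Oncoming car detected 5 degrees left of center: only nearby pixels go dark.
mask = pixel_mask([-5.0])
print(mask.count(False), "of", N_PIXELS, "pixels dimmed")  # -> 2 of 16
```

With no detections the mask is all-on, which is just an ordinary high beam; the "auto high beam" feature mentioned above is the degenerate case where any detection blanks every pixel at once.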

Rather than design entirely new headlight clusters for the US, most OEMs' solution was to offer the hardware here but disable the beam-shaping function—easy to do when it's just software. But in 2022, NHTSA relented—nine years after Toyota first asked the regulator to reconsider its stance.


Grace Hopper's famous 1982 lecture on "Future Possibilities: Data, Hardware, Software, and People" has long been publicly unavailable because of the obsolete media on which it was recorded. The National Archives and Records Administration (NARA) finally managed to retrieve the footage for the National Security Agency (NSA), which posted the lecture in two parts on YouTube.


$240 USD for an expensive piece of junk
sheeesh
