this post was submitted on 06 Oct 2024
34 points (97.2% liked)

Technology

34891 readers
773 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS
top 8 comments
[–] yogthos@lemmy.ml 6 points 1 month ago (1 children)

I've found coding assistance to be pretty lacklustre myself as well. That said, one area where language models might actually be good is emulating a type system for dynamic languages. Given how good these things are at figuring out the general shape of code, I suspect they could fairly accurately tell you the argument and return types of functions. And you could probably get away with a pretty small model if it only targets a specific language.
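A minimal sketch of what that could look like. Everything here is hypothetical: `query_model` stands in for whatever local inference API you'd use, and the prompt format is made up for illustration.

```python
# Hypothetical sketch: asking a small model to infer a type stub for an
# unannotated function. `query_model` is a stand-in for a real
# local-inference API; nothing here is a specific product's interface.

def infer_stub(source: str, query_model) -> str:
    """Return the model's guess at an annotated signature."""
    prompt = (
        "Infer PEP 484 argument and return types for this function. "
        "Reply with only the annotated signature.\n\n" + source
    )
    return query_model(prompt)

# Demo with a canned response standing in for the model:
fn = "def area(w, h):\n    return w * h"
canned = lambda _prompt: "def area(w: float, h: float) -> float: ..."
print(infer_stub(fn, canned))
```

The appeal is that the model only has to produce plausible annotations; a conventional checker could then validate them, so hallucinated types would surface as ordinary type errors.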

[–] Kache@lemm.ee 5 points 1 month ago (1 children)

Dedicated incremental static type checkers for dynamic languages already exist. In particular, Pyright for Python is fantastic and in many ways surpasses the type systems of classic statically typed languages.
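For instance, Pyright infers return types with no annotation at all and does flow-sensitive narrowing of unions. A small sketch (the comments describe what Pyright reports, e.g. via `reveal_type`):

```python
# Pyright infers the return type here as list[str], no annotation needed:
def split_csv(line: str):
    return line.split(",")

parts = split_csv("a,b,c")
# reveal_type(parts)  -> Pyright reports list[str]

# Flow-sensitive narrowing of a union, no casts required:
def to_int(value: "str | int") -> int:
    if isinstance(value, str):
        return int(value)   # value narrowed to str on this branch
    return value            # ...and to int on this one

print(to_int("42") + to_int(1))
```

That kind of narrowing is something classic statically typed languages often need explicit casts or pattern matching for.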

[–] yogthos@lemmy.ml 1 point 1 month ago

I'm not too familiar with the tooling for Python, but my experience is that you get fairly limited support in dynamic languages unless you start adding type hints. Ultimately, a static type checker can't resolve information that's not there.
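A generic illustration of the point, not tied to any particular checker: without hints the parameters are effectively `Any` and most misuse slips through, while annotating the same function gives the checker something to verify against.

```python
# Unannotated: a checker can infer little; the parameters are
# effectively Any, so most misuse goes unflagged.
def scale(xs, factor):
    return [x * factor for x in xs]

# Annotated: identical logic, but a call like scale_typed("abc", 2.0)
# is now flagged statically instead of surfacing at runtime.
def scale_typed(xs: list[float], factor: float) -> list[float]:
    return [x * factor for x in xs]

print(scale_typed([1.0, 2.0], 3.0))
```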

[–] ksynwa@lemmygrad.ml 5 points 1 month ago (2 children)

How do the SaaS AI code assistants work? I'm guessing they have to send the entire file, or even the whole codebase, to their datacenter. Won't this be a problem for corporations that want to protect their codebase?

[–] onoki@reddthat.com 4 points 1 month ago* (last edited 1 month ago)

The corporations have their own contracts with e.g. Microsoft/OpenAI. The data will likely be sent the same way as with the public tools, but the providers promise not to use it for other purposes.

[–] yogthos@lemmy.ml 2 points 1 month ago

I think it's definitely an issue that the code gets sent over. While the providers promise to keep the data private, as onoki points out, this still leaves the risk that their infrastructure could be compromised.

[–] 1984@lemmy.today 5 points 1 month ago (1 children)

For devops, it's amazing. We use many tools that we are not experts in, and it's incredible to get ready-to-use code examples showing how to configure them for various scenarios.

I save many hours every week using OpenAI's latest models.

[–] mihor@lemmy.ml 0 points 1 month ago

I use it for GUI code as well. I hate frontend work, so it saves me quite some time to get all the forms together with a few prompts.