[HN Gopher] Define policy forbidding use of AI code generators | |
___________________________________________________________________ | |
Define policy forbidding use of AI code generators | |
Author : todsacerdoti | |
Score : 507 points | |
Date : 2025-06-25 23:26 UTC (1 day ago)
web link (github.com) | |
w3m dump (github.com) | |
| teruakohatu wrote: | |
| So essentially it's "let us cover ourselves by saying it's not | |
| allowed" and in practice that means not allowing code that a | |
| human thinks is AI-generated.
| | |
| Universities have this issue too, despite many offering students | |
| and staff Grammarly (Gen AI) while also trying to ban Gen AI. | |
| _fat_santa wrote: | |
  | Well I guess the key difference is that code is deterministic:
  | whether a paper accomplishes its goals is somewhat subjective,
  | but with code it's an absolute certainty.
| | |
| I'm sure that if a contributor working on a feature used cursor | |
| to initially generate the code but then goes over it to ensure | |
| it's working as expected that would be allowed, this is more | |
| for those folks that just want to jam in a quick vibe-coded PR | |
| so they can add "contributed to the QEMU project" on their | |
| resumes. | |
| hananova wrote: | |
| You'd be wrong, the linked commit clearly says that anything | |
| written by, or derived from, AI code generation is not | |
| allowed. | |
| SchemaLoad wrote: | |
| Sounds like a good idea to ensure developers are owning the | |
| code they submit rather than hiding behind "I don't know why it | |
| does that, ChatGPT wrote it". | |
| | |
| Use AI if you want to, but if the person on the other side can | |
| tell, and you can't defend the submission as your own, that's a | |
| problem. | |
| JoshTriplett wrote: | |
| > Use AI if you want to, but if the person on the other side | |
| can tell, and you can't defend the submission as your own, | |
| that's a problem. | |
| | |
| The actual policy is "don't use AI code generators"; don't | |
| try to weasel that into "use it if you want to, but if the | |
| person on the other side can tell". That's effectively "it's | |
| only cheating if you get caught". | |
| | |
| By way of analogy, Open Source projects also typically have | |
| policies (whether written or unwritten) that you only submit | |
| code you are legally allowed to submit. In theory, you could | |
| take a pile of proprietary reverse-engineered code that you | |
| have no license to, or a pile of code from another project | |
| that you aren't respecting the license of, and submit it | |
| anyway, and slap a `Signed-off-by` on it. Nothing will | |
| physically stop you, and people _might_ not be able to tell. | |
  | That doesn't make it OK.
| SchemaLoad wrote: | |
| The way I interpret it is that if you brainstorm using | |
| ChatGPT but write your own code using the ideas created in | |
    | this step, that would be fine: the reviewer wouldn't suspect
| the code of being AI generated because you've made sure it | |
| fits in with the project and actually works. The exact | |
| wording here is that they will reject changes they suspect | |
| of being AI generated, not that you can't have read | |
| anything AI generated in the process. | |
| | |
    | Getting AI to remind you of the library's API is a fair bit
| different to having it generate 1000 lines of code you have | |
| hardly read before submitting. | |
| Art9681 wrote: | |
| What if the code is AI generated and the developer that | |
| drove it also understands the code and can explain it? | |
| Filligree wrote: | |
| Well, then you're not allowed to submit it. This isn't | |
| hard. | |
| GuB-42 wrote: | |
| It's more like a clarification.
| | |
| The rules regarding the origin of code contributions are rather
| strict, that is, you can't contribute other people's code unless
| you can make sure that the licence is appropriate. An LLM may
| output a copy of someone else's code, sometimes verbatim, without
| giving you its origin, so you can't contribute code written by
| an LLM.
| Havoc wrote: | |
| I wonder whether the motivation is really legal? I get the sense | |
| that some projects are just sick of reviewing crap AI submissions | |
| SchemaLoad wrote: | |
| This could honestly break open source, with how quickly you can | |
| generate bullshit, and how long it takes to review and reject | |
| it. I can imagine more projects going the way of Android where | |
| you can download the source, but realistically you can't | |
| contribute as a random outsider. | |
| api wrote: | |
| Quality contributions to OSS are rare unless the project is | |
| huge. | |
| loeg wrote: | |
| Historically the opposite of quality contributions has been | |
| _no_ contributions, not net-negative contributions (random | |
| slop that costs more in review than it provides benefit). | |
| lmm wrote: | |
| No it hasn't? Net-negative contributions to open source | |
| have been extremely common for years, it's not like you | |
| need an LLM to make them. | |
| loeg wrote: | |
| I guess we've had very different experiences! | |
| LtWorf wrote: | |
| Nah. I've had a lot of bad contributions. One PR deleted | |
| and readded all of the lines in the project, and the | |
| entire test suite was failing. | |
| | |
| The person got upset at me for saying I could not accept | |
| such a thing. | |
| | |
| There's other examples. | |
| hollerith wrote: | |
| I've always thought that the possibility of _forking_ the | |
| project is the main benefit to open-source licensing, and we | |
| know Android can be forked. | |
| ants_everywhere wrote: | |
| the primary benefit of open source is freedom | |
| javawizard wrote: | |
| This is so tautological that I can't really tell what | |
| point you're trying to make. | |
| ants_everywhere wrote: | |
| how can it possibly be tautological? The comment just | |
| above me said something entirely different: that the | |
| primary benefit of open source is forking | |
| b00ty4breakfast wrote: | |
| I have an online acquaintance that maintains a very small and | |
| not widely used open-source project and the amount of (what | |
| we assume to be) automated AI submissions* they have to wade | |
| through is kinda wild given the very small number of | |
| contributors and users the thing has. It's gotta be clogging | |
| up these big projects like a DDoS attack. | |
| | |
| *"Automated" as in bots and "AI submissions" as in ai- | |
| generated code | |
| guappa wrote: | |
      | I find that by being on codeberg instead of github I tune
| out a lot of the noise. | |
| zahlman wrote: | |
| For many projects you realistically can't contribute as a | |
| random outsider anyway, simply because of the effort involved | |
| in grokking enough of the existing architecture to figure out | |
| where to make changes. | |
| graemep wrote: | |
| I think it is yet another reason (potentially malicious | |
| contributors are another) that open source projects are going | |
| to have to verify contributors. | |
| disconcision wrote: | |
  | I mean, they say the policy is open for revision and it's also
| possible to make exceptions; if it's an excuse, they are going | |
| out of their way to let people down easy | |
| Lerc wrote: | |
| I'm not sure which way AI would move the dial when it comes to | |
| the median submission. Humans can, and do, make some crap code. | |
| | |
| If the problem is too many submissions, that would suggest | |
| there needs to be structures in place to manage that. | |
| | |
  | Perhaps projects receiving large quantities of updates need triage
| teams. I suspect most of the submissions are done in good | |
| faith. | |
| | |
| I can see some people choosing to avoid AI due to the | |
| possibility of legal issues. I'm doubtful of the likelihood of | |
  | such problems, but some people favour eliminating all possibility
| over minimizing likelihood. The philosopher in me feels like | |
| people who think they have eliminated the possibility of | |
| something just haven't thought about it enough. | |
| catlifeonmars wrote: | |
    | > If the problem is too many submissions, that would suggest
    | > there needs to be structures in place to manage that.
    | >
    | > Perhaps projects receiving large quantities of updates need
    | > triage teams. I suspect most of the submissions are done in
    | > good faith.
| | |
| This ignores the fact that many open source projects do not | |
| have the resources to dedicate to a large number of | |
| contributions. A side effect of LLM generated code is | |
| probably going to be a lot of code. I think this is going to | |
| be an issue that is not dependent on the overall quality of | |
| the code. | |
| Lerc wrote: | |
| I thought that this could be an opportunity for volunteers | |
| who can't dedicate the time to learn a codebase thoroughly | |
| enough to be a regular committer. They just have to | |
| evaluate a patch to see if it meets a threshold of quality | |
| where they can pass it on to someone who does know the | |
| codebase well. | |
| | |
| The barrier to being able to do a first commit on any | |
| project is usually quite high, there are plenty of people | |
      | who would like to contribute to projects but cannot
      | dedicate the time and effort to pass that initial threshold.
      | This might allow people an ability to contribute at a lower
      | level while gently introducing them to the codebase where
      | perhaps they might become a regular contributor in the
| future. | |
| ehnto wrote: | |
  | Barrier to entry and automated submissions are two aspects I see
  | changing with AI. Until now you at least had to be able to code
  | before submitting bad code.
| | |
| With AI you're going to get job hunters automating PRs for | |
| big name projects so they can stick the contributions in | |
| their resume. | |
| gerdesj wrote: | |
| The policy is concise and well bounded. It seems to me to | |
| assert that you cannot safely assign attribution of authorship | |
| of software code that you think was generated algorithmically. | |
| | |
| I use the term algorithmic because I think it is stronger than | |
| "AI lol". I note they use terms like AI code generator in the | |
  | policy, which might be just as strong but looks to me
  | unlikely to become a useful legal term (it's hardly "a man on
  | the Clapham omnibus").
| | |
| They finish with this, rather reasonable flourish: | |
| | |
| "The policy we set now must be for today, and be open to | |
| revision. It's best to start strict and safe, then relax." | |
| | |
| No doubt they do get a load of slop but they seem to want to | |
| close the legal angles down first and attribution seems a fair | |
  | place to start off. This playbook looks way better than
| curl's. | |
| bobmcnamara wrote: | |
    | Have you seen how Monsanto enforces their seed rights?
| esjeon wrote: | |
  | Possibly, but QEMU is such a critical piece of software in our
| industry. Its application stretches from one end to the other - | |
| desktop VM, cloud/remote instance, build server, security | |
| sandbox, cross-platform environment, etc. Even a small legal | |
| risk can hurt the industry pretty badly. | |
| daeken wrote: | |
| I've been trying out Claude Code (the tool I've found most | |
| effective in terms of agentic code gen/manipulation) for an | |
| emulator project of mine for the last few days. Part of it is a | |
| compiler from an architecture definition to | |
| disassembler/interpreter/recompiler. I hit a fairly minor | |
| compiler bug and decided to ask Claude to debug and fix it. Some | |
| things I noted: | |
| | |
| 1. My C# code compiled just fine and ran even, but it was | |
| convinced that I was missing a closing brace on a lambda near | |
| where the exception was occurring. The diff was... putting the
| existing brace on a new line. It confidently stated that was the
| problem and declared it fixed. | |
| | |
| 2. It did figure out that an unexpected type was being seen, and | |
| implemented a pathway that allowed for it to get to the next | |
| error, but didn't look into _why_ that type had gotten there; | |
| that was the actual bug, not the unhandled type. So it "fixed" | |
| it, but just kicked the can down the road. | |
| | |
| 3. When figuring out the issue, it just looked at the stack | |
| trace. That was it. It was running the compiler itself; it | |
| could've just embedded some debug code (like I did) and worked out
| what the actual issue was, but it didn't even try. The exception | |
| was just a NotSupportedException with no extra details to work | |
| off of, so adding just a crumb of context would let you solve the | |
| issue. | |
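|
| (A crumb of context, sketched in Python rather than the C# of
| my project, with made-up names:
|
|     HANDLERS = {}  # dispatch table: node type -> emitter
|
|     def emit(node):
|         handler = HANDLERS.get(type(node))
|         if handler is None:
|             # Name the offending type and where it came from,
|             # instead of a bare NotSupportedException.
|             raise NotImplementedError(
|                 f"unsupported node type {type(node).__name__!r} "
|                 f"at {getattr(node, 'source_location', '?')}"
|             )
|         return handler(node)
|
| That one string is the difference between a stack trace you can
| act on and one you can't.)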
| | |
| Now, is this the simplest emulator you could throw AI at? No, not | |
| at all. But neither is qemu. I'm thoroughly unconvinced that | |
| current tools could provide real value on codebases like these. | |
| I'm bullish on them for the future, and I use GenAI constantly, | |
| but this ain't a viable use case today. | |
| lowbloodsugar wrote: | |
| This is the part that makes me sure my job is safe. I use AI to | |
| _write_ code, but it really sucks at debugging. | |
| jekwoooooe wrote: | |
| When will people give up this archaic practice of sending patches | |
| over email?
| SchemaLoad wrote: | |
| Sending patches over email is basically a filter for slop. | |
| Stops the low effort drive by PRs and anyone who actually wants | |
| to invest some time in to contributing won't have a problem | |
| working out the workflow. | |
| jnwatson wrote: | |
| AI can figure out how to send a patch via email a lot faster | |
| than a human. | |
| MobiusHorizons wrote: | |
| likely when it stops being a useful way to cut out noise | |
| gerdesj wrote: | |
| When enough people don't want to do it anymore. Feel free to | |
| step up, live with email patches, and add to the numbers of | |
| those who don't like it and say so. | |
| | |
| Why is it archaic if it works? I get there might be other ways | |
| to do patch sharing and discussion but what exactly is your | |
| problem with email as a transport? | |
| | |
| You might as well describe voice and ears as archaic! | |
| jekwoooooe wrote: | |
| Archaic: | |
| | |
    | Very old or old-fashioned
| wyldfire wrote: | |
| I understand where this comes from but I think it's a mistake. I | |
| agree it would be nice if there were "well settled law" regarding | |
| AI and copyright, but there are relatively few rulings and next to
| zero legislation on which to base such a policy.
| | |
| In addition to a policy to reject contributions from AI, I think | |
| it may make sense to point out places where AI generated content | |
| _can_ be used. For example - how much of the QEMU project's
| (copious) CI setup is really stuff that is critical content to | |
| protect? What about ever-more interesting test cases or | |
| environments that could be enabled? Something like "contribute | |
| those things here instead, and make judicious use of AI there, | |
| with these kinds of guard rails..." | |
| dclowd9901 wrote: | |
| What's the risk of not doing this? Better code but slower | |
| velocity for an open source project? | |
| | |
| I think that particular brand of risk makes sense for this | |
| particular project, and the authors don't seem particularly | |
| negative toward GenAI as a concept, just going through a "one | |
| way door" with it. | |
| mrheosuper wrote: | |
| >Better code but slower velocity for an open source project | |
| | |
    | Better code and "AI-assisted coding" are not mutually
    | exclusive.
| kazinator wrote: | |
| There is a well settled practice in computing that you just | |
| don't plagiarize code. Even a small snippet. Even if copyright | |
| law would consider such a small thing "fair use". | |
| 9283409232 wrote: | |
    | This isn't 100% true, meaning it isn't well settled. Have
| people already forgotten Google vs Oracle? Google ended up | |
| winning that after years and years but the judgements went | |
| back and forth and there are around 4 or 5 guidelines to | |
| determine whether something is or isn't fair use and | |
| generative AI would fail at a few of those. | |
| kazinator wrote: | |
| Google vs. Oracle was about whether APIs are copyrightable, | |
| which is an important issue that speaks to antitrust. | |
| Oracle wanted the interface itself to be copyrighted so | |
| that even if someone reproduced the API from a description | |
| of it, it would infringe. The implication being that | |
| components which clone an API would be infringing, even | |
| though their implementation is original, discouraging | |
| competitors from making API-compatible components. | |
| | |
| My comment didn't say anything about the output of AI being | |
| fair use or not, rather that fair use (no matter where you | |
      | are getting material from) _ipso facto_ doesn't mean that
| copy paste is considered okay. | |
| | |
| Every employer I ever had discouraged copy and paste from | |
| anywhere as a blanket rule. | |
| | |
| At least, that had been the norm, before the LLM takeover. | |
| Obviously, organizations that use AI now for writing code | |
| are plagiarizing left and right. | |
| overfeed wrote: | |
| > Google vs. Oracle was about whether APIs are | |
| copyrightable, which is an important issue that speaks to | |
| antitrust. | |
| | |
| In addition to the Structure, Sequence and Organization | |
| claims, the original filing included a claim for | |
| copyright violation on 9 identical lines of code in | |
| _rangeCheck()_. This claim was dropped after the judge | |
| asked Oracle to reduce the number of claims, which forced | |
| Oracle to pare down to their strongest claims. | |
| bfLives wrote: | |
| > There is a well settled practice in computing that you just | |
| don't plagiarize code. Even a small snippet. | |
| | |
    | I think the way many developers use StackOverflow suggests
| otherwise. | |
| kazinator wrote: | |
| In the first place, in order to post _to_ StackOverflow, | |
| you are required to have the copyright over the code, and | |
| be able to grant them a perpetual license. | |
| | |
| They redistribute the material under the CC BY-SA 4.0 | |
| license. https://creativecommons.org/licenses/by-sa/4.0/ | |
| | |
| This allows visitors to use the material, with attribution. | |
| One can, of course, use the ideas in a SO answer to develop | |
| one's own solution. | |
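      |
      | In practice that looks like a comment above the borrowed
      | snippet (the link here is a placeholder, not a real answer):
      |
      |     # Adapted from https://stackoverflow.com/a/XXXXXXX
      |     # Licensed CC BY-SA 4.0; see the linked discussion.
      |     def clamp(value, lo, hi):
      |         return max(lo, min(value, hi))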
| behringer wrote: | |
| Show me the professional code base with the attribution | |
| to stack overflow and I'll eat my hat. | |
| _flux wrote: | |
| Obviously I cannot show the code base, but when I pick a | |
| pre-existing solution from Stackoverflow or elsewhere-- | |
| though it is quite rare--I do add a comment linking to | |
          | the source: after all, in the case of SO the discussion there
| might be interesting for the future maintainers of the | |
| function. | |
| | |
| I just checked, though, and the code base I'm now working | |
| with has eight stackoverflow links. Not all are even | |
          | written by me, according to a quick check with git blame
| and git log -S.. | |
| graemep wrote: | |
            | I always do too, for exactly the same reason.
| graemep wrote: | |
| > you are required to have the copyright over the code, | |
| and be able to grant them a perpetual license. | |
| | |
| Which Stack Overflow cannot verify. It might be pulled | |
| from a code base, or generated by AI (I would bet a lot | |
| is now). | |
| pavon wrote: | |
| This isn't like some other legal questions that go decades | |
| before being answered in court. There are dozens of cases | |
| working through the courts today that will shed light on some | |
| aspects of the copyright questions within a few years. QEMU has | |
| made great progress over the last 22 years without the aid of | |
  | AI; waiting a few more years isn't going to hurt them.
| dijksterhuis wrote: | |
  | A simpler solution is just to wait until the legal situation
  | is clearer.
| | |
| QEMU is (mostly) GPL 2.0 licensed, meaning (most) code | |
| contributions need to be GPL 2.0 compatible [0]. Let's say, | |
| hypothetically, there's a code contribution added by some patch | |
| involving gen AI code which is derived/memorised/copied from | |
| non-GPL compatible code [1]. Then, hypothetically, a legal case | |
| sets precedent that gen AI FOSS code must re-apply the license | |
| of the original derived/memorised/copied code. QEMU maintainers | |
| would probably need to roll back all those incompatible code | |
| contributions. After some time, those code contributions could | |
  | have ended up with downstream callers which would also need to be
| rewritten (even in CI code). | |
| | |
| It might be possible to first say "only CI code which is | |
| clearly labelled as 'DO NOT RE-USE: AI' or some such". But the | |
| maintainers would still need to go through and rewrite those | |
| parts of the CI code if this hypothetical plays out. Plus it | |
| adds extra work to reviews and merge processes etc. | |
| | |
  | It's just less work and less drama for everyone involved to say
| "no thank you (for now)". | |
| | |
| ---- | |
| | |
| caveat: IANAL, and licensing is not my specific expertise (but | |
  | I would quite like it to be one day)
| | |
| [0]: https://github.com/qemu/qemu/blob/master/LICENSE | |
| | |
  | [1]: e.g. No license / MPL / Apache / Artistic / Creative
  | Commons
  | https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...
| hinterlands wrote: | |
| I think you need to read between the lines here. Anything you | |
| do is a legal risk, but this particular risk seems acceptable | |
| to many of the world's largest and richest companies. QEMU | |
| isn't special, so if they're taking this position, it's most | |
| likely simply because they don't want to deal with LLM- | |
  | generated code for some other reason, and are eager to use legal
| risk as a cover to avoid endless arguments on mailing lists. | |
| | |
| We do that in corporate environments too. "I don't like this" | |
| -> "let me see what lawyers say" -> "a-ha, you can't do it | |
| because legal says it's a risk". | |
| curious_cat_163 wrote: | |
| That's very conservative. | |
| JonChesterfield wrote: | |
| Interesting. Harder line than the LLVM one found at | |
| https://llvm.org/docs/DeveloperPolicy.html#ai-generated-cont... | |
| | |
| I'm very "old man shouting at clouds" about this stuff. I don't
| want to review code the author doesn't understand and I don't | |
| want to merge code neither of us understand. | |
| compton93 wrote: | |
  | _I don't want to review code the author doesn't understand_
| | |
| This really bothers me. I've had people ask me to do some task | |
| except they get AI to provide instructions on how to do the | |
| task and send me the instructions, rather than saying "Hey can | |
| you please do X". It's insulting. | |
| andy99 wrote: | |
| Had someone higher up ask about something in my area of | |
    | expertise. I said I didn't think it was possible, he followed
| up with a chatGPT conversation he had where it "gave him some | |
| ideas that we could use as an approach", as if that was some | |
| useful insight. | |
| | |
    | These are the same people who think that "learning to code" is
| a translation issue they don't have time for as opposed to | |
| experience they don't have. | |
| candiddevmike wrote: | |
| Imagine a boring dystopia where everyone is given | |
| hallucinated tasks from LLMs that may in some crazy way be | |
| feasible but aren't, and you can't argue that they're | |
| impossible without being fired since leadership lacks | |
| critical thinking. | |
| tines wrote: | |
| Reminds me of the wonderful skit, The Expert: | |
| https://www.youtube.com/watch?v=BKorP55Aqvg | |
| stirfish wrote: | |
| And the solution: | |
| https://www.youtube.com/watch?v=B7MIJP90biM | |
| dotancohen wrote: | |
| That is incredibly accurate - I used to be at meetings | |
| like that monthly. Please submit this as an HN | |
| discussion. | |
| whoisthemachine wrote: | |
| Unfortunately this is the most likely outcome. | |
| turol wrote: | |
| That is a very good description of the Paranoia RPG. | |
| a4isms wrote: | |
      | > These are the same people who think that "learning to
| code" is a translation issue they don't have time for as | |
| opposed to experience they don't have. | |
| | |
| This is very, very germane and a very quotable line. And | |
| these people have been around from long before LLMs | |
| appeared. These are the people who dash off an incomplete | |
| idea on Friday afternoon and expect to see a finished | |
| product in production by next Tuesday, latest. They have no | |
| self-awareness of how much context and disambiguation is | |
| needed to go from "idea in my head" to working, | |
| deterministic software that drives something like a process | |
| change in a business. | |
| bobjordan wrote: | |
| You can change "software" to "hardware" and this is still | |
| an all too common viewpoint, even for engineers that | |
| should know better. | |
| 1dom wrote: | |
| The unfortunate truth is that approach does work, | |
| sometimes. It's really easy and common for capable | |
| engineers to think their way out of doing something | |
| because of all the different things they can think about | |
| it. | |
| | |
| Sometimes, an unreasonable dumbass whose only authority | |
        | comes from corporate hierarchy is needed to mandate the
| engineers start chipping away at the tasks. If they | |
        | weren't a dumbass, they'd know how unreasonable the thing
        | they're mandating is, and if they weren't unreasonable,
        | they wouldn't mandate that someone do it.
| | |
        | I am an engineer. "Sometimes" could be swapped for
| "rarely" above, but the point still stands: as much | |
| frustration as I have towards those people, they do | |
| occasionally lead to the impossible being delivered. But | |
| then again, a stopped clock -> twice a day etc. | |
| taleinat wrote: | |
| That approach sometimes does work, but usually very | |
| poorly and often not at all. | |
| | |
| It can work very well when the higher-up is well informed | |
| and does have deep technical experience and | |
| understanding. Steve Jobs and Elon Musk are great, well- | |
| known examples of this. They've also provided great | |
| examples of the same approach mostly failing when applied | |
| outside of their areas of deep expertise and | |
| understanding. | |
| lowbloodsugar wrote: | |
| if they're only right twice a day, you can run out of | |
| money doing stupid things before you hit midnight. in | |
| practice, there's a difference between a PHB asking a | |
| "stupid" question that leads to engineers having a | |
| lightbulb moment, vs a PHB insisting on going down a | |
| route that will never work. | |
| alluro2 wrote: | |
| A friend experienced a similar thing at work - he gave a | |
| well-informed assessment of why something is difficult to | |
      | implement and why it would take a couple of weeks, based on the
| knowledge of the system and experience with it - only for | |
| the manager to reply within 5 min with a screenshot of an | |
| (even surprisingly) idiotic ChatGPT reply, and a message | |
| along the lines of "here's how you can do it, I guess by | |
| the end of the day". | |
| | |
| I know several people like this, and it seems they feel | |
| like they have god powers now - and that they alone can | |
| communicate with "the AI" in this way that is simply | |
| unreachable by the rest of the peasants. | |
| OptionOfT wrote: | |
| Same here. You throw a question in a channel. Someone | |
        | responds in 1 minute with a code example that you'd either
        | have had lying around, or that would take > 5 minutes to write.
| | |
| The code example was AI generated. I couldn't find a | |
| single line of code anywhere in any codebase. 0 examples | |
| on GitHub. | |
| | |
| And of course it didn't work. | |
| | |
        | But, it sent me on a wild goose chase because I trusted this
| person to give me a valuable insight. It pisses me off so | |
| much. | |
| mailund wrote: | |
| I experienced mentioning an issue I was stuck on during | |
| standup one day, then some guy on my team DMs me a | |
| screenshot of chatGPT with text about how to solve the | |
| issue. When I explained to him why the solution he had | |
| sent me didn't make sense and wouldn't solve the issue, | |
| he sent me back the reply the LLM would give by pasting | |
| in my reply, at which point I stopped responding. | |
| | |
| I'm just really confused what people who send LLM content | |
| to other people think they are achieving? Like if I | |
| wanted an LLM response, I would just prompt the LLM | |
          | myself, instead of doing it indirectly through another
| person who copy/pastes back and forth. | |
| AdieuToLogic wrote: | |
| > I know several people like this, and it seems they feel | |
| like they have god powers now - and that they alone can | |
| communicate with "the AI" in this way that is simply | |
| unreachable by the rest of the peasants. | |
| | |
| A far too common trap people fall into is the fallacy of | |
| "your job is easy as all you have to do is <insert | |
| trivialization here>, but my job is hard because ..." | |
| | |
| Statistically generated text (token) responses | |
| constructed by LLM's to simplistic queries are an | |
| accelerant to the self-aggrandizing problem. | |
| spit2wind wrote: | |
| Sounds like a teachable moment. | |
| | |
| If it's that simple, sounds like you've got your | |
| solution! Go ahead and take care of it. If it fits V&V | |
| and other normal procedures, like passing tests and | |
| documentation, then we'll merge it in. Shouldn't be a | |
| problem for you since it will only take a moment. | |
| alluro2 wrote: | |
| Absolutely agree :) If only he wasn't completely non- | |
| technical, managing a team of ~30 devs of varying skill | |
| levels and experience - which is the root cause of most | |
| of the issues, I assume. | |
| latexr wrote: | |
| > and a message along the lines of "here's how you can do | |
| it, I guess by the end of the day". | |
| | |
| -- How about you do it, motherfucker?! If it's that | |
| simple, you do it! And when you can't, I'll come down | |
| there, push your face on the keyboard, and burn your | |
| office to the ground, how about that? | |
| | |
| -- Well, you don't have to get mean about it. | |
| | |
| -- Yeah, I do have to get mean about it. Nothing worse | |
| than an ignorant, arrogant, know-it-all. | |
| | |
| If Harlan Ellison were a programmer today. | |
| | |
| https://www.youtube.com/watch?v=S-kiU0-f0cg&t=150s | |
| alluro2 wrote: | |
| Hah, that's a good clip :) Those "angry people" are | |
| really essential as an outlet for the rest of us. | |
| alganet wrote: | |
| In corporate, you are _forced_ to trust your coworker | |
        | somehow and swallow it. Especially higher-ups.
| | |
| In free software though, these kinds of nonsense | |
| suggestions always happened, way before AI. Just look at | |
| any project mailing list. | |
| | |
| It is expected that any new suggestion will encounter some | |
        | resistance, and the new contributor themselves should be aware of
| that. For serious projects specifically, the levels of | |
| skepticism are usually way higher than corporations, and | |
| that's healthy and desirable. | |
| colechristensen wrote: | |
| People keep asking me if AI is going to take my job and | |
| recent experience shows that it very much is not. AI is | |
| great for being mostly correct and then giving someone | |
| without enough context a mostly correct way to shoot | |
| themselves in the foot. | |
| | |
| AI further encourages the problem in DevOps/Systems | |
| Engineering/SRE where someone comes to you and says "hey | |
| can you do this for me" having come up with the solution | |
| instead of giving you the problem "hey can you help me | |
      | accomplish this"... AI gives them solutions, which take more
      | steps to untangle back into what really needs to be done.
| | |
      | AI has knowledge, but it doesn't have taste. Especially when
      | it doesn't have all of the context a person with experience
      | has, it just has bad taste in solutions, or the absence of
      | taste, with the additional problem that it makes it much
      | easier for people to do things.
| | |
      | Permissions on what people have access to read and
      | change are now going to have to be more
| restricted because not only are we dealing with folks who | |
| have limited experience with permissions, now we have them | |
| empowered by AI to do more things which are less advisable. | |
| MoreQARespect wrote: | |
| The question about whether it takes jobs away is more | |
| whether one programmer with taste can multiply their | |
| productivity between ~3-15x and take the same salary | |
| while demand for coding remains constant. It's less about | |
| whether the tool can directly replace 100% of the | |
| functions of a good programmer. | |
| joshstrange wrote: | |
| I've started to experience/see this and it makes me want to | |
| scream. | |
| | |
| You can't dismiss it out of hand (especially with it coming | |
| from up the chain) but it takes no time at all to generate | |
| by someone who knows nothing about the problem space (or | |
| worse, just enough to be dangerous) and it could take hours | |
| or more to debunk/disprove the suggestion. | |
| | |
      | I don't know what to call this. Cognitive DDoS? Amplified
| Plausibility Attack? There should be a name for it and it | |
| should be ridiculed. | |
| whatevertrevor wrote: | |
| It's simply the Bullshit Asymmetry Principle/Brandolini's | |
| Law. It's just that bullshit generation speedrunners have | |
| recently discovered tool-assists. | |
| petesergeant wrote: | |
| > Had someone higher up ask about something in my area of | |
      | expertise. I said I didn't think it was possible, he
| followed up with a chatGPT conversation he had where it | |
| "gave him some ideas that we could use as an approach", as | |
| if that was some useful insight. | |
| | |
| I would find it very insulting if someone did this to me, | |
| for sure, as well as a huge waste of my time. | |
| | |
| On the other hand I've also worked with some very | |
| intransigent developers who've actively fought against | |
| things they simply didn't want to do on flimsy technical | |
| grounds, knowing it couldn't be properly challenged by the | |
| requester. | |
| | |
| On yet another hand, I've also been subordinate to people | |
| with a small amount of technical knowledge -- or a small | |
| amount of knowledge about a specific problem -- who'll do | |
| the exact same thing without ChatGPT: fire a bunch of mid- | |
| wit ideas downstream that you have already thought about, | |
| but you then need to spend a bunch of time explaining why | |
| their hot-takes aren't good. Or the CEO of a small digital | |
| agency I worked at circa 2004 asking us if we'd ever | |
| considered using CSS for our projects (which were of course | |
| CSS heavy). | |
| sltr wrote: | |
| Reminds me of "Appeal to Aithority". (not a typo) | |
| | |
| An LLM said it, so it must be true. | |
| | |
| https://blog.ploeh.dk/2025/03/10/appeal-to-aithority/ | |
| masfuerte wrote: | |
| You should send him a chatGPT critique of his management | |
| style. | |
| | |
| (Or not, unless you enjoy workplace drama.) | |
| itslennysfault wrote: | |
| At a company I used to work at I saw the CEO do this | |
| publicly (on slack) to the CTO who was an absolute expert | |
| on the topic at hand, and had spent 1000s of hours | |
| optimizing a specific system. Then, the CEO comes in and | |
| says I think this will fix our problems (link to ChatGPT | |
| convo). SOO insulting. That was the day I decided I should | |
| start looking for a new job. | |
| nijave wrote: | |
    | Especially when you try to correct them and they insist the
    | AI is correct
| | |
| Sometimes it's fun reverse engineering the directions back | |
| into various forum, Stack Overflow, and documentation | |
| fragments and pointing out how AI assembled similar things | |
| into something incorrect | |
| windward wrote: | |
| It's the modern equivalent of sending a LMGTFY link, except | |
| the insult is from them being purely credulous and sincere | |
| guappa wrote: | |
| My company hired a new CTO and he asked chatgpt to write some | |
| lengthy documents about "how engineering gets done in our | |
| company". | |
| | |
| He also writes all his emails with chatgpt. | |
| | |
| I don't bother reading. | |
| | |
| Oddly enough he recently promoted a guy who has been fucking | |
| around with LLMs for years instead of working as his right | |
| hand man. | |
| JonChesterfield wrote: | |
| That's directly lethal, in a limited sympathy with | |
| engineers that don't immediately head for the exit sort of | |
| fashion. Best of luck | |
| guappa wrote: | |
        | The most experienced people quit, yes. There are some
        | others, not as experienced, who are left, but seeing how a
        | noob with less seniority and a large ego is now their boss,
        | I expect they're proofreading their CVs as well.
| | |
| I think under current management immigrants have no | |
| chance of getting promoted. | |
| latexr wrote: | |
| > Oddly enough he recently promoted a guy who has been | |
| fucking around with LLMs for years instead of working as | |
| his right hand man. | |
| | |
| Why is that odd? From the rest of your description, it | |
| seems entirely predictable. | |
| dheera wrote: | |
| > I don't want to review code the author doesn't understand | |
| | |
| The author is me and my silicon buddy. _We_ understand this | |
| stuff. | |
| recursive wrote: | |
| Of course we understand it. Just ask us! | |
| halostatue wrote: | |
| I have just started adding DCO to _all_ of the open source code | |
| that I maintain and will be adding text like this to
| `CONTRIBUTING.md`: | |
| | |
| --- | |
| | |
| LLM-Generated Contribution Policy | |
| | |
| Color is a library full of complex math and subtle decisions | |
| (some of them possibly even wrong). It is extremely important | |
| that any issues or pull requests be well understood by the | |
| submitter and that, especially for pull requests, the developer | |
| can attest to the Developer Certificate of Origin for each pull | |
| request (see LICENCE). | |
| | |
| If LLM assistance is used in writing pull requests, this must | |
| be documented in the commit message and pull request. If there | |
| is evidence of LLM assistance without such declaration, the | |
| pull request will be declined. | |
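|
| For example, a commit message might carry a trailer like this
| (the trailer name is illustrative, not prescriptive):
|
|     fix: clamp out-of-gamut values in Lab conversion
|
|     Assisted-by: <LLM tool name and version>
|     Signed-off-by: Jane Developer <jane@example.com>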
| | |
| Any contribution (bug, feature request, or pull request) that | |
| uses unreviewed LLM output will be rejected. | |
| | |
| --- | |
| | |
| I am also adding this to my `SECURITY.md` entries: | |
| | |
| --- | |
| | |
| LLM-Generated Security Report Policy | |
| | |
| Absolutely no security reports will be accepted that have been | |
| generated by LLM agents. | |
| | |
| --- | |
| | |
| As it's mostly just me, I'm trying to strike a balance, but my | |
| preference is against LLM-generated contributions.
| japhyr wrote: | |
| > any issues or pull requests be well understood by the | |
| submitter | |
| | |
| I really like this phrasing, particularly in regards to PRs. | |
| I think I'll find a way to incorporate this into my projects. | |
| Even for smaller, non-critical projects, it's such a | |
| distraction to deal with people trying to make | |
| "contributions" that they don't clearly understand. | |
| brulard wrote: | |
| Good luck detecting the LLM use | |
| jitl wrote: | |
| When I use LLM for coding tasks, it's like "hey please | |
| translate this YAML to structs and extract any repeated | |
| patterns to re-used variables". It's possible to do this | |
| transform with deterministic tools, but AI will do a fine job | |
| in 30s and it's trivial to test the new output is identical to | |
| the prompt input. | |
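|
| For the YAML-in, YAML-out version of this, the whole
| equivalence test is a few lines (a sketch assuming PyYAML and
| made-up file names):
|
|     import yaml  # PyYAML
|
|     # If both files parse to the same data, the refactor
|     # (anchors, reordering, extracted variables) kept the meaning.
|     with open("config.orig.yaml") as f:
|         before = yaml.safe_load(f)
|     with open("config.new.yaml") as f:
|         after = yaml.safe_load(f)
|     assert before == after, "refactor changed the data"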
| | |
| My high-level work is absolutely impossible to delegate to AI, | |
| but AI really helps with tedious or low-stakes incidental | |
| tasks. The other day I asked Claude Code to wire up some graphs | |
| and outlier analysis for some database benchmark result CSVs. | |
| Something conceptually easy, but takes a fair bit of time to | |
| figure out libraries and get everything hooked up unless you're | |
| already an expert at csv processing. | |
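|
| The whole hookup is a dozen lines once you know the libraries
| (a sketch; the column names are invented):
|
|     import pandas as pd
|     import matplotlib.pyplot as plt
|
|     df = pd.read_csv("bench_results.csv")
|     # Flag runs more than 3 standard deviations from the mean.
|     z = (df["latency_ms"] - df["latency_ms"].mean()) / df["latency_ms"].std()
|     outliers = df[z.abs() > 3]
|
|     plt.plot(df["run"], df["latency_ms"], label="latency")
|     plt.scatter(outliers["run"], outliers["latency_ms"],
|                 color="red", label="outliers")
|     plt.legend()
|     plt.savefig("latency.png")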
| mistrial9 wrote: | |
  | Oh, agree and amplify this -- graphs are worlds unto
  | themselves. Some of the high-end published research papers
  | have astounding contents, for example.
| mattmanser wrote: | |
| In my experience, AI will not do a fine job of things like | |
| this. | |
| | |
| If the definition is past any sort of length, it will | |
| hallucinate new properties, change the names, etc. It also | |
| has a propensity to start skipping bits of the definitions by | |
| adding in comments like "/** more like this here **/" | |
| | |
| It may work for you for small YAML files, but beware doing | |
| this for larger ones. | |
| | |
| Worst part about all that is that it looks right to begin | |
| with because the start of the definitions will be correct, | |
| but there will be mistakes and stuff missing. | |
| | |
| I've got a PoC hanging around where I did something similar | |
| by throwing an OpenAPI spec at an AI and telling it to | |
| generate some typescript classes because I was being lazy and | |
| couldn't be bothered to run it through a formal tool. | |
| | |
| Took me a while to notice a lot of the definitions had subtle | |
| bugs, properties were missing and it had made a bunch of | |
| stuff up. | |
| danielbln wrote: | |
| What does "AI" mean? GPT3.5 on a website, or Claude 4 Opus | |
| plugged into function calling and a harness of LSP, type | |
| checker and tool use? These are not the same, neither in | |
| terms of output quality nor in capability space. We need to | |
| be more specific about the tools we use when we discuss | |
| them. "IDEs are slow to load" wouldn't be a useful | |
| statement either. | |
| mattmanser wrote: | |
| How do any of those things help with it recognizing it's | |
| hallucinated new property names? | |
| | |
| The types don't exist outside of the yaml/json/etc. | |
| | |
| You can't check them. | |
| jitl wrote: | |
| For bigger inputs I have the AI write the new output to an | |
| adjacent file and diff the two to confirm equivalence | |
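    |
    | With difflib that check is tiny (a sketch, made-up names):
    |
    |     import difflib
    |
    |     old = open("before.yaml").read().splitlines()
    |     new = open("after.yaml").read().splitlines()
    |     for line in difflib.unified_diff(old, new, "before", "after"):
    |         print(line)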
| stefanha wrote: | |
| There is ongoing discussion about this topic in the QEMU AI | |
| policy:
| https://lore.kernel.org/qemu-devel/20250625150941-mutt-send-...
| phire wrote: | |
| I do use GitHub copilot on my personal projects. | |
| | |
| But I refuse to use it as anything more than a fancy | |
| autocomplete. If it suggests code that's pretty close to what I | |
| was about to type anyway, I accept it. | |
| | |
| This ensures that I still understand my code, that there | |
| shouldn't be any hallucination derived bugs, [1] and there | |
| really shouldn't be any questions about copyright if I was | |
| about to type it. | |
| | |
| I find using copilot this way speeds me up. Not really because | |
| my typing is slow, it's more that I have a habit of getting | |
| bored and distracted while typing. Copilot helps me get to the | |
| next thinking/debugging part sooner. | |
| | |
| My brain really can't comprehend the idea that anyone would
| not want to understand their code. Especially if they are going
| to submit it as a PR.
| | |
| And I'm a little annoyed that the existence of such people is | |
| resulting in policies that will stop me from using LLMs as | |
| autocomplete when submitting to open source projects. | |
| | |
| I have tried using copilot in other ways. I'd love for it to be | |
| able to do menial refactoring tasks for me. But every time I
| experiment, it seems to fall off the rails so fast. Or it just | |
| ends up slower than what I could do manually because it has to | |
| re-generate all my code instead of just editing it. | |
| | |
| [1] _Though I find it really interesting that if I'm in the
| middle of typing a bug, copilot is very happy to autocomplete | |
| it in its buggy form. Even when the bug is obvious from local | |
| context, like I've typoed a variable name._ | |
| dawnerd wrote: | |
| That's how I use it too. I've tried to make agent mode work | |
| but it ends up taking just as long if not longer than just | |
| making the edits myself. And unless you're very narrowly | |
| specific models like sonnet will go off track making changes | |
| you never asked for. At least gpt4.1 is pretty lazy I guess. | |
| hsbauauvhabzb wrote: | |
  | You're the exact kind of person I want to work with: self-
  | reflective and opposed to lazy behaviours.
| rodgerd wrote: | |
| This to me is interesting when it comes to free software | |
| projects; sure there are a lot of people contributing as their | |
| day job. But if you contribute or manage a project for the | |
| pleasure of it, things which undermine your enjoyment - | |
| cleaning up AI slop - are absolutely a thing to say "fuck off" | |
| over. | |
| linsomniac wrote: | |
| >I don't want to review code the author doesn't understand | |
| | |
| I get that. But the AI tooling when guided by a competent human | |
| can generate some pretty competent code, a lot of it can be | |
| driven entirely through natural language instructions. And | |
| every few months, the tooling is getting significantly more | |
| capable. | |
| | |
| I'm contemplating what exactly it means to "understand" the | |
| code though. In the case of one project I'm working on, it's an | |
| (almost) entirely vibe-coded new storage backend to an existing | |
| VM orchestration system. I don't know the existing code base. I | |
| don't really have the time to have implemented it by hand (or I | |
| would have done it a couple years ago). | |
| | |
| But, I've set up a test cluster and am running a variety of | |
| testing scenarios on the new storage backend. So I understand | |
| it from a high level design, and from the testing of it. | |
| | |
| As an open source maintainer myself, I can imagine (thankfully | |
| I haven't been hit with it myself) how frustrating getting all | |
| sorts of low quality LLM "slop" submissions could be. I also | |
| understand that I'm going to have to review the code coming in | |
| whether or not the author of the submission understands it. | |
| | |
| So how, as developers, do we leverage these tools as | |
| appropriate, and signal to other developers the level of | |
| quality in code. As someone who spent months tracking down | |
| subtle bugs in early Linux ZFS ports, I deeply understand that | |
| significant testing can trump human authorship and review of | |
| every line of code. ;-) | |
| imiric wrote: | |
| > I'm contemplating what exactly it means to "understand" the | |
| code though. | |
| | |
| You can't seriously be questioning the meaning of | |
| "understand"... That's straight from Jordan B. Peterson's | |
| debate playbook which does nothing but devolve the | |
| conversation into absurdism, while making the person sound | |
| smart. | |
| | |
| > I've set up a test cluster and am running a variety of | |
| testing scenarios on the new storage backend. So I understand | |
| it from a high level design, and from the testing of it. | |
| | |
| You understand the system as well as any user could. Your | |
| tests only prove that the system works in specific scenarios, | |
| which may very well satisfy your requirements, but they | |
| absolutely do not prove that you understand how the system | |
| works internally, nor that the system is implemented with a | |
| reliable degree of accuracy, let alone that it's not | |
| misbehaving in subtle ways or that it doesn't have security | |
| issues that will only become apparent when exposed to the | |
| public. All of this might be acceptable for a tool that you | |
| built quickly which is only used by yourself or a few others, | |
| but it's far from acceptable for any type of production | |
| system. | |
| | |
| > As someone who spent months tracking down subtle bugs in | |
| early Linux ZFS ports, I deeply understand that significant | |
| testing can trump human authorship and review of every line | |
| of code. | |
| | |
| This doesn't match my (~20y) experience at all. Testing is | |
| important, particularly more advanced forms like fuzzing, but | |
| it's not a failproof method of surfacing bugs. Tests, like | |
| any code, can itself have bugs, it can test the wrong things, | |
| setup or mock the environment in ways not representative of | |
| real world usage, and most importantly, can only cover a | |
| limited amount of real world scenarios. Even in teams that | |
| take testing seriously, achieving 100% coverage, even for | |
| just statements, is seen as counterproductive and as a fool's | |
| errand. Deeply thorough testing as seen in projects like | |
| SQLite is practically unheard of. Most programmers I've | |
| worked with will often only write happy path tests, if they | |
| bother writing any at all. | |
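  |
  | (The happy-path pattern in miniature, a contrived sketch:
  |
  |     def parse_port(s: str) -> int:
  |         return int(s)
  |
  |     def test_parse_port():
  |         assert parse_port("8080") == 8080  # the only test written
  |
  | Nothing exercises "", "abc", "-1", or 70000, which is where
  | the actual bugs live.)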
| | |
| Which isn't to say that code review is the solution. But a | |
| human reviewing the code, building a mental model of how it | |
| works and how it's not supposed to work, can often catch | |
| issues before the code is even deployed. It is at this point | |
| that writing a test is valuable, so that that specific | |
| scenario is cemented in the checks for the software, and | |
| regressions can be avoided. | |
| | |
| So I wouldn't say that testing "trumps" reviews, but that | |
| it's not a reliable way of detecting bugs, and that both | |
| methods should ideally be used together. | |
| linsomniac wrote: | |
| You're right, "trumps" isn't the right word there. But, as | |
| you say, testing is an often neglected part of the process. | |
| There are absolutely issues that code review is going to be | |
| better at finding, particular security related ones. But, | |
| try fixing a subtle bug without a reproducible test case... | |
| ants_everywhere wrote: | |
| This is signed off primarily by RedHat, and they tend to be | |
| pretty serious/corporate. | |
| | |
| I suspect their concern is not so much whether users own the
| copyright to AI output but rather the risk that AI will spit out | |
| code from its training set that belongs to another project. | |
| | |
| Most hypervisors are closed source and some are developed by | |
| litigious companies. | |
| duskwuff wrote: | |
| I'd also worry that a language model is much more likely to | |
| introduce subtle logical errors, potentially ones which violate | |
| the hypervisor's security boundaries - and a user relying | |
| heavily on that model to write code for them will be much less | |
| prepared to detect those errors. | |
| ants_everywhere wrote: | |
| Generally speaking AI will make it easier to write more | |
| secure code. Tooling and automation help a lot with security | |
| and AI makes it easier to write good tooling. | |
| | |
| I would wager good money that in a few years the most | |
| security-focused companies will be relying heavily on AI | |
| somewhere in their software supply chain. | |
| | |
| So I don't think this policy is about security posture. No | |
| doubt human experts are reviewing the security-relevant | |
| patches anyway. | |
| tho23i4234324 wrote: | |
| I'd doubt this very much - LLMs hallucinate API calls and | |
| commit all sorts of subtle errors that you need to catch | |
| (esp. if you're on proprietary problems which it's not | |
| trained on). | |
| | |
| It's a good replacement for Google, but probably nothing | |
      | close to what it's being hyped up to be by the capital
| allocators. | |
| OtherShrezzing wrote: | |
| While LLMs are really good at generating content, one of | |
| their key weaknesses is their (relative) inability to | |
| detect _missing_ content. | |
| | |
| I'd argue that the most impactful software security bugs in | |
| the last couple of decades (Heartbleed etc) have been | |
| errors of omission, rather than errors of inclusion. | |
| | |
| This means LLMs are: | |
| | |
| 1) producing lots more code to be audited | |
| | |
| 2) poor at auditing that code for the most impactful class | |
| of bugs | |
| | |
| That feels like a dangerous combination. | |
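      |
      | Heartbleed was exactly that shape, reduced here to a toy
      | sketch (not the real OpenSSL code):
      |
      |     def heartbeat(payload: bytes, claimed_len: int) -> bytes:
      |         # The bug is the check that ISN'T here:
      |         # if claimed_len > len(payload): raise ValueError("bad length")
      |         # Python slicing silently truncates; the equivalent
      |         # C memcpy reads adjacent memory instead.
      |         return payload[:claimed_len]
      |
      | An auditor, human or LLM, has to notice absent code, which
      | is harder than flagging bad code that is present.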
| guappa wrote: | |
| > Generally speaking AI will make it easier to write more | |
| secure code | |
| | |
| In my personal experience, not at all. | |
| latexr wrote: | |
| > Generally speaking AI will make it easier to write more | |
| secure code. | |
| | |
      | https://www.backslash.security/press-releases/backslash-secu...
| duskwuff wrote: | |
| Heh. Yup. And I'd be _especially_ concerned about code | |
        | written for QEMU, as it's an unusual type of
| application. There's lots of example code and other | |
| writings about security in web applications which a | |
| language model is likely to have encountered in its | |
| training; hypervisors are much less frequently discussed. | |
| blibble wrote: | |
| > but rather the risk that AI will spit out code from its | |
| training set that belongs to another project. | |
| | |
| this is everything that it spits out | |
| ants_everywhere wrote: | |
| This is an uninformed take | |
| Groxx wrote: | |
| It is a _legally untested_ take | |
| otabdeveloper4 wrote: | |
| No, this is an uninformed take. | |
| golergka wrote: | |
| When model trained on trillions of lines of code knows that | |
| inside of a `try` block, tokens `logger` and `.` have a high | |
| probability of being followed by `error` token, but almost | |
| zero probability of being followed by `find` token, which | |
| project does this belong to? | |
| Art9681 wrote: | |
| This is a "BlockBuster laughs Netflix out of the room" moment. I | |
| am a huge fan of QEMU and used it throughout my career. The | |
| maintainers have every right to govern their project as they see | |
| fit. But this is a lot of mental gymnastics to justify clinging | |
| to punchcards in a world where we now have magnetic tape and | |
| keyboards to do things faster. This tech didn't spawn weeks ago. | |
| Every major project has had at least two years to prepare for | |
| this moment. | |
| | |
| Pull your pants up. | |
| 9283409232 wrote: | |
| You're so dramatic. Like they said in the declaration, these | |
| are the early days of AI development and all the problems they | |
  | mention will eventually be resolved, so they have no problem
  | taking a back seat while things sort themselves out, and I
  | respect that choice.
| add-sub-mul-div wrote: | |
| > This is a "BlockBuster laughs Netflix out of the room" moment | |
| | |
| I'm not sure that's the dunk you think it is. Good for Netflix | |
| for making money, but we're drowning in their empty slop | |
| content now and worse off for it. | |
| danielbln wrote: | |
| Who is forcing you to watch slop? And mind you, there was a | |
| TON of garbage at any local Blockbuster back in the day, with | |
| the added joy of having to go somewhere to rent it, being | |
| slapped with late and rewind fees or not even have | |
| availability of what you want to watch. | |
| | |
| Choice is good. It means more slop, but also more gold. | |
| Figure out how to find the gold. | |
| catlifeonmars wrote: | |
| 2 years isn't that long. It took the Linux kernel 10 years to | |
| start accepting code written in Rust. This isn't quite the same | |
| as the typical frontend flavor-of-the week JavaScript library. | |
| benlivengood wrote: | |
| Open source and libre/free software are particularly vulnerable | |
| to a future where AI-generated code is ruled to be either | |
| infringing _or_ public domain. | |
| | |
| In the former case, disentangling AI-edits from human edits could | |
| tie a project up in legal proceedings for years and projects | |
| don't have any funding to fight a copyright suit. Specifically, | |
| code that is AI-generated and subsequently modified or | |
| incorporated in the rest of the code would raise the question of | |
| whether subsequent human edits were non-fair-use derivative | |
| works. | |
| | |
| In the latter case, the license restrictions no longer apply | |
| to portions of the codebase, raising similar issues for | |
| derived code; a project that is only 98% OSS/FS-licensed | |
| suddenly has much less leverage in takedowns against companies | |
| abusing the license terms, since it has to prove that | |
| infringers are using the human-generated, licensed code. | |
| | |
| Proprietary software is only mildly harmed in either case; it | |
| would require speculative copyright owners to disassemble their | |
| binaries and try to make the case that AI-generated code | |
| infringed without being able to see the codebase itself. And | |
| plenty of proprietary software has public domain code in it | |
| already. | |
| deadbabe wrote: | |
| If software is truly wide-open source in the sense of "do | |
| whatever the fuck you want with this code, we don't care", then | |
| it has nothing to fear from AI. | |
| candiddevmike wrote: | |
| Won't apply to closed-source, non-public code, which the GPL | |
| (which QEMU uses) is quite good at ensuring becomes open source... | |
| kgwxd wrote: | |
| Can't release someone else's proprietary source under a "do | |
| whatever the fuck you want" license and actually do whatever | |
| the fuck you want, without getting sued. | |
| deadbabe wrote: | |
| It'd be like trying to squeeze blood from a stone | |
| clipsy wrote: | |
| It'd be like trying to squeeze blood from every single | |
| entity using the offending code, actually. | |
| CursedSilicon wrote: | |
| It's incredible watching someone who has no idea what | |
| they're talking about boast so confidently about what | |
| people "can" or "can't" do | |
| iechoz6H wrote: | |
| You can do that but the fact you don't get sued is more | |
| luck than judgement. | |
| rzzzt wrote: | |
| The license does exist so you can release your own software | |
| under it, however: https://en.wikipedia.org/wiki/WTFPL | |
| TeMPOraL wrote: | |
| All the more reason for OSS to _embrace_ AI generation - once | |
| it leaks into enough widely used or critical (think cURL) | |
| dependencies and exceeds certain critical mass, any | |
| judgement on the IP aspects other than "public domain" (in | |
| the broader sense) will become infeasible, as enforcing a | |
| different judgement would be like doing open heart surgery | |
| on the global economy. | |
| windward wrote: | |
| That's the situation we're already in with copyleft | |
| licences but legal teams still treat them like the | |
| plague. | |
| behringer wrote: | |
| Open source is about sharing the source code. You generally | |
| need to force companies to share their source code derived | |
| from your project, or else companies will simply take it, | |
| modify it, and never release their changes, and charge for it | |
| too. | |
| TeMPOraL wrote: | |
| Sharing is caring, being forced to share does not foster | |
| care. | |
| | |
| Companies don't care, so if you release something as open | |
| source that's relevant to them, "companies will simply take | |
| it, modify it, and never release their changes, and charge | |
| for it too" - but _that is what companies do_ , that is | |
| their very nature, and you knew that when you first opened | |
| the source. | |
| | |
| You also knew that when you picked a license, and it's a | |
| major reason for the particular choice you made. Want to | |
| force companies to share? _Pick GPL_. | |
| | |
| If you decide to yoke a dragon, and it instead snatches | |
| your shiny lure and flies away to its cave, you don't get | |
| to complain that the dragon isn't playing nice and doesn't | |
| want to become your beast of burden. If you picked MIT as | |
| your license, _that's on you_. | |
| zer00eyz wrote: | |
| > or public domain | |
| | |
| https://news.artnet.com/art-world/ai-art-us-copyright-office... | |
| | |
| https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput... | |
| | |
| I'm pretty sure that this ship has sailed. | |
| raincole wrote: | |
| It has sailed, but in the other direction: | |
| https://www.bbc.com/news/articles/cg5vjqdm1ypo | |
| fc417fc802 wrote: | |
| That's a brand new ongoing lawsuit. The ship hasn't sailed | |
| in either direction yet. It hasn't even been clearly | |
| established whether Midjourney has liability, let alone where the | |
| bounds for such liability might lie. | |
| | |
| Remember, anyone can attempt to sue anyone for anything at | |
| any time in a functional system. How far the suit makes it | |
| is a different matter. | |
| zer00eyz wrote: | |
| https://www.wired.com/story/ai-art-copyright-matthew-allen/ | |
| | |
| https://www.cnbc.com/2025/03/19/ai-art-cannot-be-copyrighted... | |
| | |
| Here are cases where the products of AI/ML are not the | |
| products of people and not capable of being copyrighted. | |
| These are about the OUTPUT being ineligible for copyright. | |
| gwd wrote: | |
| On the contrary. IANAL, but this is my understanding of the | |
| law (setting aside the "work for hire" thing for | |
| simplicity): | |
| | |
| 1. If you come up with something completely new, you are | |
| the sole copyright holder. | |
| | |
| 2. If you take someone else's copyrighted work and | |
| transform it, then _both of you_ have a copyright on the | |
| derivative work. | |
| | |
| So if you write a brand new comic book that includes Darth | |
| Vader, you can't sell that without Disney's permission [1]: | |
| they have a copyright on Darth Vader, and so your comic | |
| book is partly copyrighted by them. But at the same time, | |
| _they_ can't sell it without _your_ permission, because | |
| _you_ have a copyright on the comic book too. | |
| | |
| In the case of Midjourney outputs, my understanding of the | |
| current state of the law is this: | |
| | |
| 1. Only humans can create copyrights | |
| | |
| 2. So if Midjourney creates an entirely new image that's | |
| not derivative of anyone else's work (as defined by long- | |
| established copyright law on derivative works), then | |
| _nobody_ owns the copyright, and it's in the public domain | |
| | |
| 3. If Midjourney creates an image that _is_ derived from | |
| someone else's work (as defined by long-established | |
| copyright law on derivative works), then _only_ Disney has | |
| a copyright on that derivative work. | |
| | |
| And so, in theory, Disney could distribute Darth Vader | |
| images _you_ made with Midjourney, unless you can convince | |
| the court that you had enough creative influence over them | |
| to warrant a copyright. | |
| | |
| [1] Yes, of course, fair use; I'm trying to make a point here | |
| andreasmetsala wrote: | |
| Doesn't this also mean that if you transform the work | |
| created by Midjourney, you now have a copyright on the | |
| transformed work? | |
| | |
| I wonder what counts as transformed; is a filter enough, | |
| or does it have to be more than that? | |
| gwd wrote: | |
| That's my understanding, yes. "What counts as | |
| transformed" is fuzzy, but it's an old well-established | |
| problem with hundreds of years of case law. | |
| jssjsnj wrote: | |
| QEMU: Define policy forbidding use of AI code generators | |
| AJ007 wrote: | |
| I understand why experienced developers don't want random AI | |
| contributions from no-knowledge "developers" in a project. In | |
| any situation, if a human had to review AI code line by line, | |
| that would tie up humans for years, even setting aside the | |
| legal questions. | |
| | |
| #1 There will be no verifiable way to prove something was AI | |
| generated beyond early models. | |
| | |
| #2 Software projects that somehow are 100% human developed will | |
| not be competitive with AI assisted or written projects. The | |
| only room for debate on that is an apocalypse-level scenario | |
| where humans fail to continue producing semiconductors or | |
| electricity. | |
| | |
| #3 If a project successfully excludes AI contributions (not | |
| clear how other than controlling contributions to a tight group | |
| of anti-AI fanatics), it's just going to be cloned, and the | |
| clones will leave it in the dust. If the license permits | |
| forking then it could be forked too, but cloning and purging | |
| any potential legal issues might be preferred. | |
| | |
| There still is a path for open source projects. It will be | |
| different. There's going to be much, much more software in the | |
| future, and it's not going to be all junk (although 99% might be). | |
| Eisenstein wrote: | |
| If AI can so easily generate software that performs the | |
| expected functions, why do we even need to know that it did | |
| so? Isn't the future really just asking an AI for a result | |
| and getting that result? The AI would be writing all sorts of | |
| bespoke code to do the thing we ask, and then discarding it | |
| immediately after. That is what seems more likely, and not | |
| 'so much software we have to figure out rights to'. | |
| amake wrote: | |
| > #2 Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written projects | |
| | |
| Still waiting to see evidence of AI-driven projects eating | |
| the lunch of "traditional" projects. | |
| viraptor wrote: | |
| It's happening slowly all around. It's not obvious because | |
| people producing high quality stuff have no incentive at | |
| all to mark their changes as AI-generated. But there are | |
| also local tools being generated faster than you could adjust | |
| existing tools to do what you want. I'm running 3 things | |
| now just for myself that I generated from scratch instead | |
| of trying to send feature requests to existing apps I can | |
| buy. | |
| | |
| It's only going to get more pervasive from now on. | |
| alganet wrote: | |
| Can you show these 3 things to us? | |
| WD-42 wrote: | |
| For some reason these fully functional ai generated | |
| projects that the authors vibe out while playing guitar | |
| and clipping their toenails are never open source. | |
| dcow wrote: | |
| Except this one is (see your sibling). | |
| fc417fc802 wrote: | |
| > the authors vibe out while playing guitar and clipping | |
| their toenails | |
| | |
| I don't think anyone is claiming that. If you submit | |
| changes to a FOSS project and an LLM assisted you in | |
| writing them how would anyone know? Assuming at least | |
| that you are an otherwise competent developer and that | |
| you carefully review all code before you commit it. | |
| | |
| The (admittedly still controversial) claim being made is | |
| that developers with LLM assistance are more productive | |
| than those without. Further, that there is little | |
| incentive for such developers to advertise this | |
| assistance. Less trouble for all involved to represent it | |
| as 100% your own unassisted work. | |
| EGreg wrote: | |
| Why would you need to carefully review code? That is so | |
| 2024. You're bottlenecking the process and are at a | |
| disadvantage when the AI could be working 24/7. We have | |
| AI agents that have been trained to review thousands of | |
| PRs that are produced by other, generative agents, and | |
| together they have already churned out much more software | |
| than human teams can write in a year. | |
| | |
| AI "assistance" is a short intermediate phase, like the | |
| "centaurs" that Garry Kasparov was very fond of (human + | |
| computer beat both a human and a computer by itself... | |
| until the computer-only became better). | |
| | |
| https://en.wikipedia.org/wiki/Advanced_chess | |
| amake wrote: | |
| > We have AI agents that have been trained to review | |
| thousands of PRs that are produced by other, generative | |
| agents, and together they have already churned out much | |
| more software than human teams can write in a year. | |
| | |
| Was your comment tongue-in-cheek? If not, where is this | |
| huge mass of AI-generated software? | |
| rvnx wrote: | |
| All around you, just that it doesn't make sense for | |
| developers to reveal that a lot of their work is now | |
| about chunking and refining the specifications written by | |
| the product owner. | |
| | |
| Admitting such is like admitting you are overpaid for | |
| your job, and that a 20 USD AI-agent can do better and | |
| faster than you for 75% of the work. | |
| | |
| Is it easy to admit that you have learnt skills for 10+ | |
| years that are progressively already getting replaced by | |
| a machine? (like thousands of jobs in the past). | |
| | |
| More and more, developer is going to be a monkey job | |
| where your only task is to make sure there is enough coal | |
| in the steam machine. | |
| | |
| Compilers destroyed the jobs of developers writing | |
| assembler code; they had to adapt. They insisted that | |
| hand-written assembler was better. | |
| | |
| Here is the same, except you write code in natural | |
| language. It may not be optimal in all situations but it | |
| often gets the job done. | |
| bonzini wrote: | |
| Good luck debugging | |
| TeMPOraL wrote: | |
| You don't _debug_ AI-generated code - you throw the | |
| problematic chunk away and have AI write it again, and if | |
| that doesn't help, you repeat the process, possibly with | |
| larger chunks. | |
| | |
| Okay, not in every case, but in many, and that's where | |
| we're headed. The reason is _economics_ - i.e. the same | |
| reason approximately no one in the West repairs their | |
| clothes or appliances; they just throw the damaged thing | |
| away and buy a new one. Human labor is expensive, | |
| automated production is cheap - even more so in digital | |
| space. | |
| alganet wrote: | |
| You don't throw away dams, bridges, factories, | |
| submarines, planes. | |
| | |
| There is a lot of man made stuff you just cannot easily | |
| replace. Instead, we maintain it. | |
| | |
| Remember, _this is not about you_. The post is about | |
| qemu. | |
| | |
| I would argue that qemu is analogous to one of these | |
| pieces of infrastructure. There is only a handful of | |
| powerful virtual machines. These are _not_ easily | |
| replaceable commodities. | |
| amake wrote: | |
| > All around you, just that it doesn't make sense for | |
| developers to reveal that | |
| | |
| OK, but I asked for evidence and people just keep not | |
| providing any. | |
| | |
| "God is all around you; he just works in mysterious ways" | |
| | |
| OK, good luck with that. | |
| rvnx wrote: | |
| Billions of people believe in god(s). In fact, 75 to 85% | |
| of the world population, btw. | |
| amake wrote: | |
| And? | |
| fc417fc802 wrote: | |
| Obviously it's the basis for a religion. We're to have | |
| faith in the ability of LLMs. To ask for evidence of that | |
| is to question the divine. You can ask a model itself for | |
| the relevant tenets pertaining to any given situation. | |
| latexr wrote: | |
| And not that long ago, the majority of the population | |
| believed the Earth was flat, and that cigarettes were good | |
| for your health. Radioactive toys were being sold to | |
| children. | |
| | |
| Wide belief does not equal truth. | |
| alganet wrote: | |
| Billions of people _say_ they believe in god. It's very | |
| different. | |
| | |
| -- | |
| | |
| When you analyze church attendance, it drops to roughly | |
| 50% instead of 85% of the population: | |
| | |
| https://en.wikipedia.org/wiki/Church_attendance#Demographics | |
| | |
| If you start to investigate many aspects of religious | |
| belief, like how many Christians read the Bible, the | |
| numbers drop drastically to less than 15% | |
| | |
| https://www.statista.com/statistics/299433/bible-readership-... | |
| | |
| This demonstrates that we cannot rely on self-reporting | |
| to understand religious belief. In practice, most people | |
| are closer to atheists than believers. | |
| fc417fc802 wrote: | |
| That's rather silly. Neither of those things is a | |
| requirement for belief. | |
| alganet wrote: | |
| You can believe all you want, but practice is what | |
| actually matters. | |
| | |
| It's the same thing with AI. | |
| throwawayoldie wrote: | |
| Reality is not a matter decided by majority vote. | |
| alganet wrote: | |
| I have a complete proof that P=NP but it doesn't make | |
| sense to reveal to the world that now I'm god. It would | |
| crush their little hearts. | |
| ben_w wrote: | |
| P = NP is less "crush their little hearts", more "may | |
| cause widespread heart attacks across every industry due | |
| to cryptography failing, depending on if the polynomial | |
| exponent is small enough". | |
| Dylan16807 wrote: | |
| A very very big if. | |
| | |
| Also a sufficiently good exponential solver would do the | |
| same thing. | |
| latexr wrote: | |
| > Assuming at least that you are an otherwise competent | |
| developer and that you carefully review all code before | |
| you commit it. | |
| | |
| That is a big assumption. If everyone were doing that, | |
| this wouldn't be a major issue. But as the curl developer | |
| has noted, people are using LLMs without thinking and | |
| wasting everyone's time and resources. | |
| | |
| https://www.linkedin.com/posts/danielstenberg_hackerone-curl... | |
| | |
| I can attest to that. Just the other day I got a bug | |
| report, clearly written with the assistance of an LLM, | |
| for software which has been stable and used in several | |
| places for years. This person, when faced with an error | |
| on their first try, instead of pondering "what am I doing | |
| wrong", opened a bug report with a "fix". Of | |
| course, they were using the software wrong. They did not | |
| follow the very short and simple instructions and | |
| essentially invented steps (probably suggested by an LLM) | |
| that caused the problem. | |
| | |
| Waste of time for everyone involved, and one more notch | |
| on the road to causing burnout. Some of the worst kind of | |
| users are those who think "bug" means "anything which | |
| doesn't immediately behave the way I thought it would". | |
| LLMs empower them, to the detriment of everyone else. | |
| fc417fc802 wrote: | |
| Sure I won't disagree that those people also exist but I | |
| don't think that's who the claim is being made about. | |
| Pointing out that subpar developers exist doesn't refute | |
| that good ones exist. | |
| bredren wrote: | |
| Mine is. And it is awesome: | |
| https://github.com/banagale/FileKitty | |
| | |
| The most recent release includes a macOS build in a dmg | |
| signed by Apple: | |
| https://github.com/banagale/FileKitty/releases/tag/v0.2.3 | |
| | |
| I vibed that workflow just so more people could have | |
| access to this tool. It was a pain and it actually took | |
| time away from toenail clipping. | |
| | |
| And while I didn't lay hands on a guitar much during this | |
| period, I did manage to build this while bouncing between | |
| playing Civil War tunes on a 3D-printed violin and | |
| generating music in Suno for a soundtrack to "Back on | |
| That Crust," the missing and one true spiritual successor | |
| to ToeJam & Earl: | |
| https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb | |
| fingerlocks wrote: | |
| This app is concatenating files with an extra line of | |
| metadata added? You know this could be done in a few | |
| lines of shell script? You can then make it a Finder | |
| action extension so it's part of the system file manager | |
| app. | |
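| | |
| A minimal sketch of that concatenation step (in Python | |
| rather than shell, purely as an illustration; the header | |
| format here is made up, not FileKitty's actual output): | |
| | |
|     import sys | |
|     from pathlib import Path | |
| | |
|     def concat_with_headers(paths): | |
|         """Concatenate files, prefixing each with one | |
|         metadata line.""" | |
|         chunks = [] | |
|         for p in map(Path, paths): | |
|             size = p.stat().st_size | |
|             # Hypothetical header format, for illustration. | |
|             chunks.append(f"## File: {p} ({size} bytes)") | |
|             chunks.append(p.read_text()) | |
|         return "\n".join(chunks) | |
| | |
|     if __name__ == "__main__": | |
|         print(concat_with_headers(sys.argv[1:])) | |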
| pwm wrote: | |
| Sic transit gloria mundi | |
| bredren wrote: | |
| The parent claim was that devs don't open-source their | |
| personal AI tools. FileKitty is mine and it is MIT- | |
| licensed on GitHub. | |
| | |
| It began as an experiment in AI-assisted app design and a | |
| cross-platform "cat these files" utility. | |
| | |
| Since then it has picked up: | |
| | |
| - Snapshot history (and change flags) for any file | |
| selection | |
| | |
| - A rendered folder tree that LLMs can digest, with per- | |
| prompt ignore filters | |
| | |
| - String-based ignore rules for both tree and file | |
| output, so prompts stay surgical | |
| | |
| My recent focus is making that generated context modular, | |
| so additional inputs (logs, design docs, architecture | |
| notes) can plug in cleanly. Apple's new on-device | |
| foundation models could pair nicely with that. | |
| | |
| The bigger point: most AI tooling hides the exact nature | |
| of context. FileKitty puts that step in the open and | |
| keeps the programmer in the loop. | |
| | |
| I continue to believe LLMs can solve big problems with | |
| appropriate context and that intentionality in context | |
| prep is an important step in evaluating ideas and | |
| implementation suggestions found in LLM outputs. | |
| | |
| There's a Homebrew build available and I'd be happy to | |
| take contributions: https://github.com/banagale/FileKitty | |
| brulard wrote: | |
| man, the icon is beautiful! | |
| TeMPOraL wrote: | |
| Going by the standard of "But there are also local tools | |
| generated faster than you could adjust existing tools to | |
| do what you want", here's a random one of mine that's in | |
| regular use by my wife: | |
| | |
| https://github.com/TeMPOraL/qr-code-generator | |
| | |
| Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro | |
| (I forgot to note that down in this project), and | |
| recently modified with Claude Code because I had to test | |
| it on _something_. | |
| | |
| Getting the first version of this up was literally both | |
| faster and easier than finding a QR code generator that | |
| I'm sure is not bloated, not bullshit, not loaded with | |
| trackers, that's not using shorteners or its own URL | |
| (it's always a stupid idea to use URL shorteners you | |
| don't control), not showing ads, mining bitcoin and shit, | |
| one that my wife can use in her workflow without being | |
| distracted too much. Static page, domain I own, a bit of | |
| fiddling with LLMs. | |
| | |
| What I can't link to is half a dozen single-use tools or | |
| faux tools created on the fly as part of working on | |
| something. But this happens to me a couple times a month. | |
| | |
| To anchor another vertex in this parameter space, I found | |
| it _easier and faster_ to ask an LLM to build me a | |
| "breathing timer" (one that counts down N seconds and | |
| resets, repeatedly) with analog indicator by requesting | |
| it, because a search query to Google/Kagi would be of | |
| comparable length, and then I'd have to click on results! | |
| | |
| EDIT: Okay, another example: | |
| | |
| https://github.com/TeMPOraL/tampermonkey-scripts/blob/master... | |
| | |
| It overlays a trivial UI to set up looping over a segment | |
| of any YouTube video, and automatically persists the | |
| setting by video ID. It solves the trivial annoyance of | |
| channel jingles and other bullshit at start/end of videos | |
| that I use repeatedly as background music. | |
| | |
| This was mostly done zero-shot by Claude, with maybe two | |
| or three requests for corrections/extra features, total | |
| development time maybe 15 minutes. I use it every day all | |
| the time ever since. | |
| | |
| You could say, "but SponsorBlock" or whatever, but per | |
| what GP wrote, I just needed a small fraction of | |
| functionality of the tools I know exist, and it was | |
| trivial to generate that with AI. | |
| alganet wrote: | |
| Your QR generator is actually a project written by humans | |
| repackaged: | |
| | |
| https://github.com/neocotic/qrious | |
| | |
| All the hard work was made by humans. | |
| | |
| I can do `npm install` without having to pay for AI, | |
| thanks. | |
| ben_w wrote: | |
| I am reminded of a meme about musicians. Not well enough | |
| to find it, but it was something like this: | |
| | |
| Real musicians don't mix loops they bought. | |
| Real musicians make their own synth patches. | |
| Real musicians build their own instruments. | |
| Real musicians hand-forge every metal component in their | |
| instruments. ... They say real musicians raise goats for | |
| the leather for the drum-skins, but I wouldn't know | |
| because I haven't made any music in months and the goats | |
| smell funny. | |
| | |
| There's two points here: | |
| | |
| 1) even though most people on here know what npm is, | |
| many of us are not web developers and don't really know | |
| how to turn a random package into a useful webapp. | |
| | |
| 2) The AI is faster than googling a finished product that | |
| already exists, not just as an NPM package, but as a | |
| complete website. | |
| | |
| Especially because search results require you to go | |
| through all the popups everyone stuffs everywhere (cookie | |
| banners, ads) before you even find out whether the | |
| website you went to first is actually a scam that doesn't | |
| do the right thing (or perhaps *anything*) at all. | |
| | |
| It is also, for many of us, the same price: free. | |
| latexr wrote: | |
| > I am reminded of a meme about musicians. Not well | |
| enough to find it | |
| | |
| You only need to search for "loops goat skin". You're | |
| butchering the quote and its meaning quite a bit. The | |
| widely circulated version is: | |
| | |
| > I thought using loops was cheating, so I programmed my | |
| own using samples. I then thought using samples was | |
| cheating, so I recorded real drums. I then thought that | |
| programming it was cheating, so I learned to play drums | |
| for real. I then thought using bought drums was cheating, | |
| so I learned to make my own. I then thought using premade | |
| skins was cheating, so I killed a goat and skinned it. I | |
| then thought that that was cheating too, so I grew my own | |
| goat from a baby goat. I also think that is cheating, but | |
| I'm not sure where to go from here. I haven't made any | |
| music lately, what with the goat farming and all. | |
| | |
| It's not about "real musicians" [1] but a personal | |
| reflection on dependencies and abstractions and the | |
| nature of creative work and remixing. Your interpretation | |
| of it is backwards. | |
| | |
| [1] https://en.wikipedia.org/wiki/No_true_Scotsman | |
| alganet wrote: | |
| Ice Ice Baby getting the bass riff of Under Pressure is | |
| sampling. Making a cover is covering. Milli Vanilli is | |
| another completely different situation. | |
| | |
| I am sorry, none of your points are made. Makes no sense. | |
| | |
| The LLM work sounds dumb, and the suggestion that it made | |
| "a qr code generator" is disingenuous. The LLM barely did | |
| a frontend for it. Barely. | |
| | |
| Regarding the "free" price, read the comment I replied on | |
| again: | |
| | |
| > Built with Aider and either Sonnet 3.5 or Gemini 2.5 | |
| Pro | |
| | |
| Paid tools. | |
| | |
| It sounds like the author paid for `npm install`, and | |
| thinks he's on top of things and being smart. | |
| ben_w wrote: | |
| > The LLM work sounds dumb, and the suggestion that it | |
| made "a qr code generator" is disingenuous. The LLM | |
| barely did a frontend for it. Barely. | |
| | |
| Yes, and? | |
| | |
| The goal wasn't "write me a QR library" it was "here's my | |
| pain point, solve it". | |
| | |
| > It sounds like the author payed for `npm install`, and | |
| thinks he's on top of things and being smart. | |
| | |
| I can put this another way if you prefer: | |
| | |
| Running `npm install qrious`: trivial. | |
| Knowing qrious exists and how to integrate it into a | |
| page: expensive. | |
| | |
| https://www.snopes.com/fact-check/know-where-man/ | |
| | |
| > > Built with Aider and either Sonnet 3.5 or Gemini 2.5 | |
| Pro | |
| | |
| > Paid tools. | |
| | |
| I get Sonnet 4 for free at https://claude.ai -- I know | |
| version numbers are weird in this domain, but I kinda | |
| expect that means Sonnet 3.5 was free at some point? Was | |
| it not? I mean, 3.7 is also a smaller version number but | |
| listed as "pro", so IDK... | |
| | |
| Also I get Gemini 2.5 Pro for free at | |
| https://aistudio.google.com | |
| | |
| Just out of curiosity, I've just tried using Gemini 2.5 | |
| Pro (for free) myself to try this. The result points to a | |
| CDN of qrcodejs, which I assume is this, but don't know | |
| my JS libraries so can't confirm this isn't just two | |
| different ones with the same name: | |
| https://github.com/davidshimjs/qrcodejs | |
| | |
| My biggest issue with this kind of thing in coding is the | |
| same as my problem with libraries in general: you're | |
| responsible for the result even if you don't read what | |
| the library (/AI) is doing. So, I expect some future | |
| equivalent of the npm left-pad incident -- memetic | |
| monoculture, lots of things fail at the same time. | |
| alganet wrote: | |
| > Knowing qrious exists and how to integrate it into a | |
| page: expensive. | |
| | |
| qrious literally has it integrated already: | |
| | |
| https://github.com/davidshimjs/qrcodejs/blob/master/index.ht... | |
| | |
| I see many issues. The main one is that none of this is | |
| relevant to the qemu discussion. It's on another whole | |
| level of project. | |
| | |
| I kind of regret asking the poor guy to show his stuff. | |
| None of these tutorial projects come even close to what | |
| an AI contribution to qemu would look like. It's | |
| pointless. | |
| ben_w wrote: | |
| The very first part of the quotation is "Knowing qrious | |
| exists". | |
| | |
| So the fact they've already got the example is great if | |
| you do in fact already have that knowledge, _and | |
| *completely useless* if you don't_. | |
| | |
| > I kind of regret asking the poor guy to show his stuff. | |
| None of these tutorial projects come even close to what | |
| an AI contribution to qemu would look like. It's | |
| pointless. | |
| | |
| For better and worse, I suspect it's _very much_ the kind | |
| of thing AI would contribute. | |
| | |
| I also use it for things, and it's... well, I _have_ seen | |
| worse code from real humans, but I don't think highly of | |
| those humans' coding skills. The AI I've used so far are | |
| solidly at the quality level of "decent for a junior | |
| developer", not more, not less. Ridiculously broad | |
| knowledge (which is why that quality level is even | |
| useful), but that quality level. | |
| | |
| Use it because it's cheap or free, when that skill level | |
| is sufficient. Unless there's a legal issue, which there | |
| is for qemu, in which case don't. | |
| TeMPOraL wrote: | |
| Person in question here. | |
| | |
| I didn't know qrious exist. Last time I checked for | |
| frontend-only QR code generators myself, pre-AI, I | |
| couldn't find anything useful. I don't do frontend work | |
| daily, I'm not on top of the garbagefest the JS | |
| environment is. | |
| | |
| Probably half the win applying AI to this project was | |
| that it a) discovered qrious for me, and b) made me a | |
| working example frontend, in less time than it would take | |
| me to find the library myself among sea of noise. | |
| | |
| 'ben_w is absolutely correct when he wrote: | |
| | |
| > _The goal wasn't "write me a QR library", it was | |
| "here's my pain point, solve it"._ | |
| | |
| And: "Running `npm install qrious`: trivial. Knowing | |
| qrious exists and how to integrate it into a page: | |
| expensive." | |
| | |
| This is precisely what it was. I built this in between | |
| other stuff, paying half attention to it, to solve an | |
| immediate need my wife had. The only thing I cared about | |
| it here is that: | |
| | |
| 1. It worked and was trivial to use | |
| | |
| 2. Was 100% under my control, to guarantee no tracking, | |
| telemetry, ads, crypto miners, and other usual web | |
| dangers, are present, and ensure they never are going to | |
| be present. | |
| | |
| 3. It had no build step whatsoever, and minimal | |
| dependencies that could be vendored, because again, _I | |
| don't do webshit for a living_ and don't have time for | |
| figuring out this week's flavor of building "Hello world" | |
| in Node land. | |
| | |
| (Incidentally, I'm using Claude Code to build something | |
| bigger using a web stack, which forced me to figure out | |
| the current state of tooling, and believe me, it's not | |
| much like what I saw 6 months ago, and nothing like what | |
| I saw a year ago.) | |
| | |
| 2 and 3 basically translate to "I don't want to _ever_ | |
| think about it again". Zero ops is my principle :). | |
| | |
| ---- | |
| | |
| > _I see many issues. The main one is that none of this | |
| is relevant to the qemu discussion. It's on another | |
| whole level of project._ | |
| | |
| It was relevant to the topic discussed in this subthread. | |
| Specifically about the statement: | |
| | |
| > _But there are also local tools generated faster than | |
| you could adjust existing tools to do what you want. I'm | |
| running 3 things now just for myself that I generated | |
| from scratch instead of trying to send feature requests | |
| to existing apps I can buy._ | |
| | |
| The implicit point of larger importance is: AI | |
| contributions may not show up fully polished in OSS | |
| repos, but making it possible to do throwaway tools to | |
| address pain points directly provides advantages _that | |
| compound_. | |
| | |
| And my examples are just concrete examples of projects | |
| that were AI generated with a mindset of "solve this pain | |
| point" and not "build a product", and _making them took | |
| less time and effort than my participation in this | |
| discussion already did_. | |
| alganet wrote: | |
| Cool, makes sense. | |
| | |
| Since you're here, I have another question relevant to | |
| the thread: do you pay for AI tools or are you using them | |
| for free? | |
| TeMPOraL wrote: | |
| TL;DR: I pay, I always try to use SOTA models if I can. | |
| | |
| I pay for them; until last week, this was almost | |
| entirely[0] pay-as-you-go use of API keys via TypingMind | |
| (for chat) and Aider (for coding). The QR code project I | |
| linked was made by Aider. Total cost was around $1 IIRC. | |
| | |
| API options were, until recently, very cheap. Most of my | |
| use was around $2 to $5 per project, sometimes under $2. | |
| I mostly worked with GPT-4, then Sonnet 3.5, briefly with | |
| Deepseek-R1; by the time I got around to testing Claude | |
| Sonnet 3.7, Google released Gemini 2.5 Pro, which was | |
| substantially cheaper, so I stuck to the latter. | |
| | |
| Last week I got myself the Max plan for Anthropic (first | |
| 5x, then the 20x one) specifically for Claude Code, | |
| because using pay-as-you-go pricing with top models in | |
| the new "agentic" way got stupidly expensive; $100 or | |
| $200 per month may sound like a lot, but less so when | |
| taking the API route would have you burn this much in a | |
| day or two. | |
| | |
| -- | |
| | |
| [0] - I have the $20/month "Plus" subscription to | |
| ChatGPT, which I keep because of gpt-4o image generation | |
| and o3 being excellent as my default model for random | |
| questions/problems, many of them not even coding-related. | |
| I could access o3 via API, but this gets stupidly | |
| expensive for casual use; subscription is a better deal | |
| now. | |
| ben_w wrote: | |
| > TL;DR: I pay, I always try to use SOTA models if I can. | |
| | |
| Interesting; I'm finding myself doing the opposite -- I | |
| have API access to at least OpenAI, but all the SOTA | |
| stuff becomes free so fast that I don't expect to lose | |
| much by waiting. | |
| | |
| My OpenAI API credit expired mostly unused. | |
| Philpax wrote: | |
| Here's Armin Ronacher describing his open-source "sloppy | |
| XML" parser that he had AI write with his guidance from | |
| this week: | |
| https://lucumr.pocoo.org/2025/6/21/my-first-ai-library/ | |
| latexr wrote: | |
| > To be clear: this isn't an endorsement of using models | |
| for serious Open Source libraries. This was an experiment | |
| to see how far I could get with minimal manual effort, | |
| and to unstick myself from an annoying blocker. The | |
| result is good enough for my immediate use case and I | |
| also felt good enough to publish it to PyPI in case | |
| someone else has the same problem. | |
| | |
| By their own admission, this is just kind of OK. They | |
| don't even know how good or bad it is, just that it kind | |
| of solved an immediate problem. That's not how you create | |
| sustainable and reliable software. Which is OK, sometimes | |
| you just need to crap something out to do a quick job, | |
| but that doesn't really feel like what your parent | |
| comment is talking about. | |
| irthomasthomas wrote: | |
| My llm-consortium project was vibe coded. Some notes on | |
| how I did that in the announcement tweet if you click | |
| through https://x.com/karpathy/status/1870692546969735361 | |
| viraptor wrote: | |
| Only the simplest one is open (and before you discount it | |
| as too trivial, somehow none of the other ones did what I | |
| wanted) https://github.com/viraptor/pomodoro | |
| | |
| The others are just too specific to be useful to anyone | |
| else: an Android app for automatic processing of | |
| some text messages and a work scheduling/prioritising | |
| thing. The time to make them generic enough to share | |
| would be much longer than creating my specific version in | |
| the first place. | |
| a57721 wrote: | |
| > and before you discount it as too trivial, somehow none | |
| of the other ones did what I wanted | |
| | |
| No offense, it's really great that you are able to make | |
| apps that do exactly what you want, but your examples are | |
| not very good to show that "software projects that | |
| somehow are 100% human developed will not be competitive | |
| with AI assisted or written projects" (as someone else | |
| suggested above). Complex real world software is | |
| different from pomodoro timers and TODO lists. | |
| viraptor wrote: | |
| Cut it out with the patronising. I work with complex | |
| software, which is why I specifically mentioned that the | |
| only example I published was simple. | |
| | |
| > but your examples are not very good to show that | |
| "software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written | |
| projects" | |
| | |
| Here's the thing though - it's already the case, because | |
| I wouldn't have created those tools by hand otherwise. I | |
| just don't have the time, and they're too personal/edge- | |
| case to pay anyone to make them. So the comparison in | |
| this case is between 100% human-developed non-existent | |
| software and an AI-generated project which exists. The | |
| latter wins in every category by default. | |
| Dylan16807 wrote: | |
| I don't think they're being patronizing, it's that | |
| "simple personal app that was barely worth making" is | |
| nice to have but not at all what they want evidence of. | |
| viraptor wrote: | |
| Whether it was worth making is for me to judge since it | |
| is a personal app. It improves my life and work, so yes, | |
| it was very much worth it. | |
| Dylan16807 wrote: | |
| You said you wouldn't have made it if it took longer, | |
| isn't that a barely? | |
| | |
| But either way it's not an example of what they wanted. | |
| a57721 wrote: | |
| My apologies, I didn't want to sound patronizing and | |
| wasn't making assumptions about your work and experience | |
| based on your examples, I am happy that generative AI | |
| allows you to make such apps. However, they are very | |
| similar to the demos that are always presented as | |
| showcases. | |
| fragmede wrote: | |
| > Complex real world software is different from pomodoro | |
| timers and TODO lists. | |
| | |
| Simplistic Pomodoro timer with no features, sure, but a | |
| full blown modern Todo app that syncs to configurable | |
| backend(s), has a website, mobile apps, an electron app, | |
| CLI/TUI, web hooks, other integrations? Add a login | |
| system and allow users to assign todos to each other, and | |
| have todos depend on other todos and visualizations and | |
| it starts looking like JIRA, which is totally complex | |
| real world software. | |
| | |
| The weakness of LLMs is that they can't do anything | |
| that's not in their training data. But they've got so | |
| much training data that it's like this: say you had a box | |
| of Lego bricks but could only use those bricks to build | |
| models. If you had a brick copier, and one copy of every | |
| single brick type on the Internet, the fact that you | |
| couldn't invent new pieces from scratch would be a | |
| limitation, but given the number of bricks on all the | |
| Internet, that covers a lot of area. Most (but not all) | |
| software is some flavor of CRUD app, and if LLMs could | |
| only write every CRUD app ever, that would still be | |
| tremendous value. | |
| alganet wrote: | |
| > The time to make them generic enough to share would be | |
| much longer than creating my specific version in the | |
| first place | |
| | |
| Welcome to the reality of software development. "Works on | |
| my machine" is often not good enough to make the cut. | |
| viraptor wrote: | |
| It doesn't matter that my thing doesn't generalise if | |
| someone can build their own customised solution quickly. | |
| But also, if I wanted to sell it or distribute it, I'd | |
| ensure it was more generic from the beginning. | |
| alganet wrote: | |
| You need to put your money where your mouth is. | |
| | |
| If you comment about AI generated code in a thread about | |
| qemu (mission-critical project that many industries rely | |
| upon), a pomodoro app is not going to do the trick. | |
| | |
| And no, it doesn't "show that is possible". qemu is not | |
| only more complex, it's a whole different problem space. | |
| nijave wrote: | |
| Not sure about parent, but you could argue JetBrains' | |
| fancy autocomplete is AI and generates a substantial | |
| portion of code. It runs using a local model and, in my | |
| experience, does pretty well at guessing the rest of the | |
| line with minimal input (so you could argue 80% of each | |
| line was AI-generated). | |
| linsomniac wrote: | |
| Not OP, but: | |
| | |
| I'm getting towards the end of a vibe-coded ZFS storage | |
| backend for ganeti that includes the ability to live- | |
| migrate VMs to another host by: taking a snapshot and | |
| replicating it to the target, pausing the VM, taking | |
| another incremental snapshot and replicating it, and then | |
| unpausing the VM on the new destination machine. | |
| https://github.com/linsomniac/ganeti/tree/newzfs | |
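| | |
| A rough sketch of that live-migration flow (in Python; the | |
| zfs and ssh invocations are real commands, but the dataset | |
| and snapshot names are invented and the pause/resume hooks | |
| are placeholders for hypervisor-specific calls, not the | |
| actual branch's code): | |
| | |
|     import subprocess | |
| | |
|     def run(cmd): | |
|         """Run a shell pipeline, raising on failure.""" | |
|         subprocess.run(cmd, shell=True, check=True) | |
| | |
|     def pause_vm(vm): ...         # placeholder hook | |
|     def resume_vm(host, vm): ...  # placeholder hook | |
| | |
|     def migrate(ds, target, vm): | |
|         # 1. Full snapshot replicated while the VM runs. | |
|         run(f"zfs snapshot {ds}@mig1") | |
|         run(f"zfs send {ds}@mig1" | |
|             f" | ssh {target} zfs recv -F {ds}") | |
|         # 2. Pause briefly; send only the small delta. | |
|         pause_vm(vm) | |
|         run(f"zfs snapshot {ds}@mig2") | |
|         run(f"zfs send -i @mig1 {ds}@mig2" | |
|             f" | ssh {target} zfs recv {ds}") | |
|         # 3. Start the VM on the destination host. | |
|         resume_vm(target, vm) | |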
| | |
| Other LLM tools I've built this week: | |
| | |
| This afternoon I built a web-based SQL query | |
| editor/runner with results display, for dev/ops people to | |
| run read-only queries against our production database. To | |
| replace an existing super simple one, and add query | |
| syntax highlighting, snippet library, and other modern | |
| features. I can probably release this though I'd need to | |
| verify that it won't leak anything. Targets SQL Server. | |
| | |
| A couple CLI Jira tools to pull a list of tickets I'm | |
| working on (with cache so I can get an immediate | |
| response, then get updates after Jira response comes | |
| back), and tickets with tags that indicate I have to | |
| handle them specially. | |
| | |
| An icinga CLI that downtimes hosts, for when we do | |
| sweeping machine maintenances like rebooting a VM host | |
| with dozens of monitored children. | |
| | |
| An Ansible module that is a "swiss army knife" for | |
| filesystem manipulation, merging the functions of copy, | |
| template, file, so you can loop over a list and: create a | |
| directory, template a couple files into it, doing a | |
| notify on one and a when on another, ensure a file exists | |
| if it doesn't already, to reduce duplication of | |
| boilerplate when doing a bunch of file deploys. This I | |
| will release as an Ansible Galaxy module once I have it | |
| tested a little more. | |
| EGreg wrote: | |
| I vibe-coded my own MySQL-compatible database that | |
| performs better than MariaDB, after my agent optimized it | |
| for 12 hours. It is also a time-traveling DB and performs | |
| better on all benchmarks and the AI says it is completely | |
| byzantine-fault-tolerant. Programmers, you had a nice | |
| run. /s | |
| cess11 wrote: | |
| Looks like two commits: | |
| | |
| https://github.com/linsomniac/ganeti/commit/e91766bfb42c67ab... | |
| | |
| https://github.com/linsomniac/ganeti/commit/f52f6d689c242e3e... | |
| linsomniac wrote: | |
| Thanks, I hadn't pushed from my test cluster, check | |
| again. "This branch is 12 commits ahead of, 4 commits | |
| behind ganeti/ganeti:master" | |
| amake wrote: | |
| None of this seems relevant to the original claim: | |
| "Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written | |
| projects" | |
| | |
| I don't feel like it's meaningful to discuss the | |
| "competitiveness" of a handful of bespoke local or | |
| internal tools. | |
| brulard wrote: | |
| It's like saying "if we discuss professional furniture | |
| making, it's not relevant that you are able to cut, | |
| drill, assemble, glue, paint, finish wood quickly with | |
| good enough quality". | |
| alganet wrote: | |
| All the features you mentioned are not coming from the | |
| AI. | |
| | |
| Here it is invoking the actual zfs commands: | |
| | |
| https://github.com/ganeti/ganeti/compare/master...linsomniac... | |
| | |
| All the extra python boilerplate just makes it harder to | |
| understand IMHO. | |
| ziml77 wrote: | |
| I can't imagine they ever even looked at what they | |
| checked in, because it includes code that the LLM was | |
| using to investigate other code. | |
| amake wrote: | |
| > It's not obvious because people producing high quality | |
| stuff have no incentive at all to mark their changes as | |
| AI-generated | |
| | |
| I feel like we'd be hearing from businesses that crushed | |
| their competition by delivering faster or with fewer | |
| people. Where are those businesses? | |
| | |
| > But there are also local tools generated | |
| | |
| This is really not the same thing as the original claim | |
| ("Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written | |
| projects"). | |
| bredren wrote: | |
| This is happening right now and it won't be obvious until | |
| the liquidity events provide enough cover for victory-lap | |
| storytelling. | |
| | |
| The very knowledge that an organization is experiencing | |
| hyper acceleration due to its successful adoption of AI | |
| across the enterprise is proprietary. | |
| | |
| There are no HBS case studies about businesses that | |
| successfully established and implemented strategic | |
| pillars for AI because the pillars were likely written in | |
| the past four months. | |
| amake wrote: | |
| > This is happening right now and it won't be obvious | |
| until | |
| | |
| I asked for evidence and, as always, lots of people are | |
| popping out of the woodwork to swear that it's true but I | |
| _can't_ see the evidence yet. | |
| | |
| OK, then. Good luck with that. | |
| brulard wrote: | |
| Do you think that company success and its causes are | |
| measurable day by day? I've worked for an industrial | |
| company that completely screwed up their software | |
| development, but their business is rooted so deep into | |
| other businesses, that it would take a decade until the | |
| result emerges. This may be extreme, but for average | |
| business I would expect 2-3 years for these results to be | |
| measurable. Startups may be quicker, but it's extremely | |
| difficult to compare them as every startup is quite | |
| unique. So if you wait for hard evidence, good luck not | |
| missing the train. | |
| TeMPOraL wrote: | |
| > _I feel like we'd be hearing from businesses that | |
| crushed their competition by delivering faster or with | |
| fewer people. Where are those businesses?_ | |
| | |
| As if tech part was the major part of getting the product | |
| to market. | |
| | |
| Those businesses are probably everywhere. They just | |
| aren't open about admitting they're using AI to speed up | |
| their marketing/product design/programming/project | |
| management/graphics design, because a) it's not normal | |
| outside some tech startup sphere to brag about how you're | |
| improving your _internal process_, and b) because | |
| _almost everyone else is doing that too_ , so it | |
| partially cancels out - that is what competition on the | |
| market means, and c) admitting to use of AI in current | |
| climate is kind of a questionable PR move. | |
| | |
| WRT. those who fail to leverage the new tools and are | |
| destined to be outcompeted, this process takes extended | |
| time, because companies have inertia. | |
| | |
| >> _But there are also local tools generated_ | |
| | |
| > _This is really not the same thing as the original | |
| claim_ | |
| | |
| Point is that such wins compound. You get yak shaving | |
| done faster by fashioning your own tools on the fly, and | |
| it also cuts cost and a huge burden of _maintaining | |
| relationships with third parties_ [0] | |
| | |
| -- | |
| | |
| [0] - Because each account you create, each subscription | |
| you take, even each online tool you kinda track and hope | |
| hope hope won't disappear on you - each such case comes | |
| with a cognitive tax of a business relationship you | |
| probably didn't want, that often costs you money | |
| directly, and that you need to keep track of. | |
| amake wrote: | |
| > Those businesses are probably everywhere. They just | |
| aren't open about admitting | |
| | |
| "Where's the evidence?" "Probably everywhere." | |
| | |
| OK, good luck, have fun | |
| TeMPOraL wrote: | |
| Yup. Or, "Just look around!". | |
| amake wrote: | |
| If it was self-evident then I wouldn't need to ask for | |
| evidence. And I imagine you wouldn't need to be waving | |
| your hands making excuses for the lack of evidence. | |
| TeMPOraL wrote: | |
| To me it's self-evident, but is probably one causal step | |
| removed from what you'd like to see. I can't point to | |
| specific finished or released projects that were | |
| substantially accelerated by use of GenAI[0]. But I can | |
| point out that nearly everyone I talked with in the last | |
| year, that does _any_ kind of white-collar job, is either | |
| afraid of LLMs, actively using LLMs at work and finding | |
| them very useful, or both. | |
| | |
| It's not possible for this level of impact at the bottom | |
| to make no net change near the top, so I propose | |
| that effects may be delayed and not immediately apparent. | |
| LLMs are still a new thing in business timelines. | |
| | |
| TL;DR: just wait a bit more. | |
| | |
| One thing I can hint at, but can't go into details, is | |
| that I personally know of at least one enterprise-grade | |
| project whose roadmap and scoping - and therefore, | |
| funding - is critically dependent on AI speeding up | |
| significant amount of development and devops tasks by at | |
| least 2-3x; that aspect is understood by both developers, | |
| managers, customers and investors, and not disputed. | |
| | |
| So, again: just wait a little longer. | |
| | |
| -- | |
| | |
| [0] - Except maybe for Aider, whose author always posts | |
| how much of its own code Aider wrote in a given release; | |
| it's usually way above 50%. | |
| ben_w wrote: | |
| > One thing I can hint at, but can't go into details, is | |
| that I personally know of at least one enterprise-grade | |
| project whose roadmap and scoping - and therefore, | |
| funding - is critically dependent on AI speeding up | |
| significant amount of development and devops tasks by at | |
| least 2-3x; that aspect is understood by both developers, | |
| managers, customers and investors, and not disputed. | |
| | |
| Mm. I can now see why, in your other comment, you want to | |
| keep up with the SOTA. | |
| TeMPOraL wrote: | |
| It's actually unrelated. I try to keep up with the SOTA | |
| because if I'm not using the current-best model, then | |
| each time I have a hard time with it or get poor results, | |
| I keep wondering if I'm just wasting my time fighting | |
| with something a stronger model would do without | |
| problems. It's a personal thing; I've been like this ever | |
| since I got API access to GPT-4. | |
| | |
| My use of LLMs isn't all that big, and I don't have any | |
| special early access or anything. It's just that the | |
| tokens are so cheap that, for casual personal and | |
| professional use, the pricing difference didn't matter. | |
| Switching to a stronger model meant that my average | |
| monthly bill went from $2 to $10 or something. These | |
| amounts were immaterial. | |
| | |
| Use patterns and pricing change, though, and recently | |
| this made some SOTA models (notably o3, gpt-4.5 and the | |
| most recent Opus model) too expensive for my use. | |
| | |
| As for the project I referred to, let's put it this way: | |
| the reference point is what was SOTA ~2-3 months ago | |
| (Sonnet 3.7, Gemini 2.5 Pro). And the assumptions aren't | |
| just wishful thinking - they're based on actual | |
| experience with using these models (+ some tools) to | |
| speed up specific kind of work. | |
| fireflash38 wrote: | |
| Schrödinger's AI. It's everywhere, but you can't point to | |
| it cause it's apparently indistinguishable from humans, | |
| except for the shitty AI which is just shitty AI. | |
| | |
| It's a thought-terminating cliché. | |
| conartist6 wrote: | |
| And because from the outside everything looks worse than | |
| ever. Worse quality, no more support, established | |
| companies going crazy to cut costs. AI slop is replacing | |
| thoughtful content across the web. Engineering morale is | |
| probably at an all-time low in my 20 years watching this | |
| industry... | |
| | |
| So my question is: if so many people should be bragging | |
| to me and celebrating how much better things are, why | |
| does it look to me like they are worse and everyone is | |
| miserable about it...? | |
| TeMPOraL wrote: | |
| I think in context of this discussion you might be | |
| confused about what the term "better" refers to. | |
| | |
| > _And because from the outside everything looks worse | |
| than ever. Worse quality, no more support, established | |
| companies going crazy to cut costs. AI slop is replacing | |
| thoughtful content across the web. Engineering morale is | |
| probably at an all time low for my 20 years watching this | |
| industry._ | |
| | |
| That is true and present across the board. But consider, | |
| all of that is what "better" means to companies, and most | |
| of that is caused by actions that employers call | |
| _success_ and reward employees for. | |
| | |
| Our industry, in particular, is a stellar example - half | |
| of the things we make are making things worse; of the | |
| things that seem to make things better, half of them are | |
| actually making things worse, but it's not visible | |
| because of accounting trickery (e.g. cutting specialized | |
| roles is legible to beancounters; the workload being | |
| diffused and dragging everyone else's productivity down | |
| is not). | |
| | |
| So yeah, AI is making things better for its users, but | |
| expect that what's "better" for the industry whose main | |
| product is automating people away from their jobs, is | |
| going to translate to a lot of misery down the line. | |
| guappa wrote: | |
| > They just aren't open about admitting they're using AI | |
| to speed up their marketing/product | |
| design/programming/project management/graphics design | |
| | |
| Sure... they'd hate to get money thrown at them from | |
| investors. | |
| TeMPOraL wrote: | |
| Did you notice that what companies say to investors and | |
| what they say to the public are usually entirely | |
| different things? When they get mixed up - especially | |
| when investor-bound information reaches general public - | |
| it's usually a bad day for the company. | |
| tomjen3 wrote: | |
| You are just not listening to the right places. | |
| | |
| fly.pieter.com made its author a fortune while he live | |
| vibe-coded it on Twitter - a fortune made by building a | |
| modern multiplayer game. | |
| | |
| Or Michael Luo, who got a legal notice after making a | |
| much cheaper app that did the same as DocuSign: | |
| https://analyticsindiamag.com/ai-news-updates/vibe-coder-get... | |
| | |
| There are others, but if you have found a gold mine, why | |
| would you inform the world? | |
| fragmede wrote: | |
| We'll have to see how it pans out for Cloudflare. They | |
| published an OAuth thing and all the prompts used to | |
| create it. | |
| | |
| https://github.com/cloudflare/workers-oauth-provider/ | |
| luqtas wrote: | |
| that's like driving big personal vehicles and having a | |
| bunch of children and eating a bunch of meat and doing | |
| nothing about it because marine and terrestrial | |
| ecosystems haven't yet been fully destroyed by global | |
| warming | |
| lynx97 wrote: | |
| Ahh, there you go: environmental activists outright | |
| saying that having children is a crime against nature. | |
| Wonderful, you hit a rather bad stereotype right on the | |
| head. What is next? Earth would be better off if humanity | |
| was eradicated? | |
| luqtas wrote: | |
| go inform yourself [0] | |
| | |
| 0: https://iopscience.iop.org/article/10.1088/1748-9326/a | |
| a7541/... | |
| mcoliver wrote: | |
| 80-90% of Claude is now written by Claude | |
| 0x457 wrote: | |
| And whose lunch is it eating? | |
| rvnx wrote: | |
| Your lunch. The developers behind Claude are very rich | |
| and do not need their developer careers, since they have | |
| enough to retire. | |
| amake wrote: | |
| Using AI tools to make AI tools is not the impact | |
| _outside of the AI bubble_ that people are looking for. | |
| brahma-dev wrote: | |
| Cigarettes do not cause cancer. | |
| brulard wrote: | |
| Exactly. People cause cancer to themselves by smoking. | |
| ben_w wrote: | |
| How can you tell which project is which? | |
| | |
| I mean, sure, there's plenty of devs who refuse to use AI, | |
| but how many projects rather than individuals are in each | |
| category? | |
| | |
| And is Microsoft "traditional"? I name them specifically | |
| because their CEO claims 20-30% of their new code is AI | |
| generated: https://techcrunch.com/2025/04/29/microsoft-ceo- | |
| says-up-to-3... | |
| blibble wrote: | |
| > #2 Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written projects | |
| | |
| "competitive", meaning: "most features/lines of code emitted" | |
| might matter to a PHB or Microsoft | |
| | |
| but has never mattered to open source | |
| alganet wrote: | |
| Quoting them: | |
| | |
| > The policy we set now must be for today, and be open to | |
| revision. It's best to start strict and safe, then relax. | |
| | |
| So, no need for the drama. | |
| A4ET8a8uTh0_v2 wrote: | |
| I am of two minds about it, having now seen both good | |
| coders augmented by AI and bad coders further diminished | |
| by it (I would even argue it's worse than Stack Overflow, | |
| because back then they would at least have had to adjust | |
| the code a little bit). | |
| | |
| I am personally somewhere in the middle: just good enough | |
| to know I am really bad at this, so I make sure that I | |
| don't contribute to anything that is actually important | |
| (like QEMU). | |
| | |
| But how many people recognize their own strengths and | |
| weaknesses? That is part of the problem, and now we are | |
| proposing that even that modicum of self-regulation (as | |
| flawed as it is) be removed. | |
| | |
| FWIW, I hear you. I also don't have an answer. Just thinking | |
| out loud. | |
| rapind wrote: | |
| > If a project successfully excludes AI contributions (not | |
| clear how other than controlling contributions to a tight | |
| group of anti-AI fanatics), it's just going to be cloned, and | |
| the clones will leave it in the dust. | |
| | |
| Yeah I don't think so. But if it does then who cares? AI can | |
| just make a better QEMU at that point I guess. | |
| | |
| They aren't hurting anyone with this stance (except the AI | |
| hype lords), which I'm pretty sure isn't actually an anti-AI | |
| stance, but a pragmatic response to AI slop in its current | |
| state. | |
| basilgohar wrote: | |
| I feel like this is mostly a proofless assertion. I'm | |
| aware that what you hint at is happening, but the | |
| conclusions you arrive at are far from proven, or even | |
| reasonable, at this stage. | |
| | |
| For what it's worth, I think AI for code will settle into | |
| a place alongside other coding tools - hinting, | |
| intellisense, linting, maybe even static or dynamic | |
| analysis - but I doubt that NOT using AI will be a | |
| critical hit to productivity. | |
| | |
| Someone else in the thread already mentioned it's a bit of an | |
| amplifier. If you're good, it can make you better, but if | |
| you're bad it just spreads your poor skills like a robot | |
| vacuum spreads animal waste. | |
| galangalalgol wrote: | |
| I think that was his point: the project full of bad | |
| developers isn't the competition. It is a peer whose | |
| skill matches yours and who uses agents on top of that. | |
| By myself I am no match for myself + cline. | |
| Retric wrote: | |
| That's true in the short term. Longer term it's | |
| questionable, as using AI tools heavily means you don't | |
| remember all the details, creating a new form of | |
| technical debt. | |
| linsomniac wrote: | |
| Dude, have you ever looked at code you wrote 6 months ago | |
| and gone "What was the developer thinking?" ;-) | |
| ringeryless wrote: | |
| yes, constantly. I also don't remember much contextual | |
| domain info about a given section of code two weeks after | |
| delving into some other part of the same app. | |
| | |
| So-called AI makes this worse. | |
| | |
| Let me remind you of gyms, now that humans have been | |
| saved from so much manual activity... | |
| linsomniac wrote: | |
| >So-called AI makes this worse. | |
| | |
| The AI tooling is also really, really good at piecing | |
| together the code, the contextual domain, the | |
| documentation, the tests, and the related issues/tickets | |
| - it could even take the change history into account - to | |
| help refresh your memory of unfamiliar code in the | |
| context of bugs or new changes you are looking at making. | |
| | |
| Whether or not you go to the gym, you are probably going | |
| to want to use an excavator if you are going to dig a | |
| basement. | |
| Dylan16807 wrote: | |
| > So-called AI makes this worse. | |
| | |
| I think that needs actual testing. At what time distances | |
| is there an effect, and how big is it? Even if there is | |
| an effect, it could be small enough that a mild | |
| productivity boost from AI is more important. | |
| brulard wrote: | |
| Exactly. Claude Code can explain code I've written to me | |
| better than I could. I feel like people who don't see AI | |
| as a transformative element in programming probably | |
| haven't experienced what it can do today as opposed to 6 | |
| months or a year ago. It's a night-and-day difference. | |
| And it was still useful back then. | |
| galangalalgol wrote: | |
| Yeah 6 months ago Claude could make me a rust function | |
| that wouldn't compile but got me pointed in the right | |
| direction. Now it will make it correct with comments and | |
| unit tests with idiomatic style just using chat. But we | |
| don't have to use chat. Even open models today like | |
| devstral when combined with an agent can run cargo check | |
| and clippy and self prompt (with rusts great error | |
| messages) to fix everything. Prompting it with some unit | |
| test cases lets it iterate until those pass too. Software | |
| development has fundamentally changed. I still would | |
| advise developers who care about performance to be able | |
| to read asm. But just like I wouldn't write asm anymore, | |
| because the llvm optimiser is really good, we are going | |
| to get to a point where designing the test cases will be | |
| the same as developing the software. | |
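| | |
| A minimal sketch of such a check/fix loop, assuming a | |
| hypothetical propose_fix() wrapper around whatever model | |
| you use - an illustration, not any particular agent: | |
| | |
|     import subprocess | |
| | |
|     def propose_fix(src: str, errors: str) -> str: | |
|         """Hypothetical call into your model.""" | |
|         raise NotImplementedError | |
| | |
|     def check() -> str: | |
|         out = [] | |
|         for cmd in (["cargo", "check"], | |
|                     ["cargo", "clippy"], | |
|                     ["cargo", "test"]): | |
|             r = subprocess.run(cmd, capture_output=True, | |
|                                text=True) | |
|             if r.returncode != 0: | |
|                 out.append(r.stderr) | |
|         return "\n".join(out) | |
| | |
|     def fix_until_green(path: str, tries: int = 5): | |
|         for _ in range(tries): | |
|             errors = check() | |
|             if not errors: | |
|                 return True   # compiles, lints, passes | |
|             src = open(path).read() | |
|             open(path, "w").write( | |
|                 propose_fix(src, errors)) | |
|         return False          # hand back to a human | |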
| CamperBob2 wrote: | |
| I don't need to remember much, really. I have tools for | |
| that. | |
| | |
| Really, _really_ good tools. | |
| otabdeveloper4 wrote: | |
| IMO LLMs are best when used as locally-run offline search | |
| engines. This is a clear and obvious disruptive technology. | |
| | |
| But we will need to get a lot better at finetuning first. | |
| People don't want generalist LLMs, they want "expert | |
| systems". | |
| danielbln wrote: | |
| Speak for yourself, I prefer generalist LLMs. Also, the | |
| bitter lesson of ML applies. | |
| XorNot wrote: | |
| A reasonable conclusion about this would simply be that the | |
| developers are saying "we're not merging anything which _you_ | |
| can't explain". | |
| | |
| Which is entirely reasonable. The trend of people on HN | |
| saying "I asked an LLM and this is what it said..." is | |
| infuriating. | |
| | |
| It's just an upfront declaration that if your answer to | |
| something is "it's what Claude thinks" then it's not getting | |
| merged. | |
| Filligree wrote: | |
| That's not what the policy says, however. You could be the | |
| world's most honest person, using Claude only to generate | |
| code you described to it in detail and fully understand, | |
| and would still be forbidden. | |
| heavyset_go wrote: | |
| Regarding #1, at least in the mainframe/cloud model of hosted | |
| LLMs, the operators have a history of model prompts and | |
| outputs. | |
| | |
| For example, if using Copilot, Microsoft also has every | |
| commit ever made if the project is on GitHub. | |
| | |
| They could, theoretically, determine what did or didn't come | |
| out of their models and was integrated into source trees. | |
| | |
| Regarding #2 and #3, with relatively novel software like QEMU | |
| that models platforms that other open source software | |
| doesn't, LLMs might not be a good fit for contributions. | |
| Especially where emulation and hardware accuracy, timing, | |
| quirks, errata, etc. matter. | |
| | |
| For example, modeling a new architecture or emulating new | |
| hardware might have LLMs generating convincing-looking | |
| nonsense. Similarly, integrating them with newly added and | |
| changing APIs like in kvm might be a poor choice for LLM use. | |
| safety1st wrote: | |
| It seems to me that the point in your first paragraph argues | |
| against your points #2 and #3. | |
| | |
| If a project allows AI generated contributions, there's a | |
| risk that they'll be flooded with low quality contributions | |
| that consume human time and resources to review, thus | |
| paralyzing the project - it'd be like if you tried to read | |
| and reply to every spam email you receive. | |
| | |
| So the argument goes that #2 and #3 will not materialize; | |
| blanket acceptance of AI contributions will not help | |
| projects become more competitive, and will actually slow | |
| them down. | |
| | |
| Personally I happen to believe that reality will converge | |
| somewhere in the middle, you can have a policy which says | |
| among other things "be measured in your usage of AI," you can | |
| put the emphasis on having contributors do other things like | |
| pass unit tests, and if someone gets spammy you can ban them. | |
| So I don't think AI is going to paralyze projects but I also | |
| think its role in effective software development is a bit | |
| narrower than a lot of people currently believe... | |
| devmor wrote: | |
| None of your claims here is based in fact. These are | |
| unproven, wishful fantasies that may or may not | |
| eventually be true. | |
| | |
| No one should be evaluating or writing policy based on | |
| fantasy. | |
| brabel wrote: | |
| Are you familiar with the futures market? It's all about | |
| what you call fantasy! Similarly, if you are determining | |
| the strategy of your organization, all you have to help | |
| you is "fantasy". By the time evidence exists in | |
| sufficient quantity, your lunch has already been eaten | |
| long ago. A good CEO is one who can see where the market | |
| is going before anyone else. You may be right that AI is | |
| just a fad, but given how much the big companies and all | |
| the major startups of the last few years are investing in | |
| it, that's an overwhelmingly fringe position to hold at | |
| this point. | |
| devmor wrote: | |
| Both the futures market and resource planning are | |
| (usually) based on evidential standards. When you make | |
| those decisions without any reasoning, you are | |
| _gambling_, and might as well go to the casino. | |
| | |
| But notably, FOSS development is neither a corporation | |
| nor stock trading. It is focused on longevity and | |
| maintainability. | |
| otabdeveloper4 wrote: | |
| > Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written projects. | |
| | |
| There is zero evidence so far that AI improves software | |
| developer efficiency. | |
| | |
| No, just because you had fun vibing with a chatbot doesn't | |
| mean you delivered the end product faster. All of the | |
| supposed AI software development gains are entirely self- | |
| reported based on "vibes". (Remember these are the same | |
| people who claimed massive developer efficiency gains from | |
| programming in Haskell or Lisp a few years back.) | |
| | |
| Note I'm not even touching on the tech debt issue here, but | |
| it is also important. | |
| | |
| P.S. The hallucination and counting-to-five problems will | |
| never go away. They are intrinsic to the LLM approach. | |
| gadders wrote: | |
| I am guessing they don't need people to prove that | |
| contributions don't contain AI code; they just need the | |
| contributor to say they didn't use any. That way, if any | |
| AI code is found in their contribution, the liability | |
| lies with the contributor (but IANAL). | |
| graemep wrote: | |
| AFAIK in most places it might help with the amount of | |
| damages, but does not let you off the hook. | |
| conartist6 wrote: | |
| #2 is a complete and total fallacy, trivially disprovable. | |
| | |
| Overall velocity doesn't come from writing a lot more code, | |
| or even from writing code especially quickly. | |
| kylereeve wrote: | |
| > #2 Software projects that somehow are 100% human developed | |
| will not be competitive with AI assisted or written projects. | |
| The only room for debate on that is an apocalypse level | |
| scenario where humans fail to continue producing | |
| semiconductors or electricity. | |
| | |
| ?? | |
| | |
| "AI" code generators are still mostly overhyped nonsense that | |
| generate incorrect code all the time. | |
| furyofantares wrote: | |
| Much of that may be true in the (near) future but it also | |
| makes sense for people to make decisions that apply right | |
| now, and update as the future comes along. | |
| koolala wrote: | |
| This is a win for MIT license though. | |
| graemep wrote: | |
| From what point of view? | |
| | |
| For someone using MIT licensed code for training, it | |
| still requires a copy of the license and the copyright | |
| notice in "copies or substantial portions of the | |
| software". So I guess it's fine for a snippet, but if the | |
| AI reproduces too much of it, then it's in breach. | |
| | |
| From the point of view of someone who does not want their | |
| code used by an LLM, an LLM's use of GPL code is more | |
| likely to be a breach. | |
| Thorrez wrote: | |
| Is there any likelihood that the output of the model would be | |
| public domain? Even if the model itself is public domain, the | |
| prompt was created by a human and impacted the output, so I | |
| don't see how the output could be public domain. And then after | |
| that, the output was hopefully reviewed by the original | |
| prompting human and likely reviewed by another human during | |
| code review, leading to more human impact on the final code. | |
| AndrewDucker wrote: | |
| There is no copyright in AI art. Presumably the same | |
| reasoning would apply to AI code: | |
| https://iclg.com/news/22400-us-court-confirms-ai- | |
| generated-a... | |
| lars_francke wrote: | |
| This particular case is US only. | |
| | |
| The rest of the world might decide differently. | |
| AndrewDucker wrote: | |
| Absolutely. | |
| | |
| And as long as you're not worried about people in the USA | |
| reusing your code then you're all good! | |
| graemep wrote: | |
| Proprietary source code would not usually end up training | |
| LLMs. Unless it's leaked, how would an LLM have access to | |
| it? | |
| | |
| > it would require speculative copyright owners to disassemble | |
| their binaries | |
| | |
| I wonder whether AI might be a useful tool for making that | |
| easier. | |
| | |
| If you have evidence then you can get courts to order | |
| disclosure or examination of code. | |
| | |
| > And plenty of proprietary software has public domain code in | |
| it already. | |
| | |
| I am pretty sure there is a significant amount of proprietary | |
| code that has FOSS code in it, against license terms | |
| (especially GPL and similar). | |
| | |
| A lot of proprietary code is now being written using AIs | |
| trained on FOSS code, and companies are open about this. | |
| It might open an interesting can of worms. | |
| physicsguy wrote: | |
| > Unless its leaked | |
| | |
| Given the number of people on HN who say they're using, | |
| e.g., Cursor, OpenAI, etc. through work, and my | |
| experience with workplaces saying "absolutely you can't | |
| use it", I suspect a large amount is being leaked. | |
| graemep wrote: | |
| I thought most of these did not use users' context and | |
| input for training? | |
| pmlnr wrote: | |
| Licence incompatibility is enough. | |
| strogonoff wrote: | |
| People sometimes miss that copyleft is powered by copyright. | |
| Copyleft (which means Linux, Blender, and plenty of other | |
| goodness) needs the ability to impose some rules on what users | |
| do with your work, presumably in the interest of common good. | |
| Such ability implies IP ownership. | |
| | |
| This does not mean that powerful interests abusing | |
| copyright with ever-increasing terms and enforcement | |
| overreach is fair game. It harms the common interest. | |
| | |
| However, it _does_ mean that abusing copyright from the | |
| other side and denouncing the core ideas of IP ownership | |
| --which is now sort of in the interest of certain | |
| companies (and capital heavily invested in certain | |
| fashionable but not yet profitable startups) based around | |
| IP expropriation--harms the common interest just as much. | |
| ben_w wrote: | |
| While this is a generally true statement (and has echoes in | |
| other areas like sovereign citizens), GenAI may make | |
| copyright (and copyleft) economically redundant. | |
| | |
| While the AI we have now is not good enough to make an entire | |
| operating system when asked*, if/when they can, the benefits | |
| of all the current licensing models evaporate, and it doesn't | |
| matter if that model is proprietary with no source, or GPL, | |
| or MIT, because by that point anyone else can reproduce your | |
| OS for whatever the cost of tokens is without ever touching | |
| your code. | |
| | |
| But as we're not there yet, I agree with @benlivengood that | |
| (most**) OSS projects must treat GenAI code as if it's | |
| unusable. | |
| | |
| * At least, not a modern OS. I've not tried getting any model | |
| to output a tiny OS that would fit in a C64, and while I | |
| doubt they can currently do this, it is a bet I might lose, | |
| whereas I am confident all models would currently fail at | |
| e.g. reproducing Windows XP. | |
| | |
| ** I think MIT licensed projects can probably use GenAI code, | |
| they're not trying to require derivatives to follow the same | |
| licence, but I'm not a lawyer and this is just my barely | |
| informed opinion from reading the licenses. | |
| strogonoff wrote: | |
| I have a few sociophilosophical quibbles about the impact | |
| of this, but to focus on a practical part: | |
| | |
| > by that point anyone else can reproduce your OS for | |
| whatever the cost of tokens is without ever touching your | |
| code. | |
| | |
| Do you think that the cost of tokens will remain low enough | |
| once these companies for now operating at loss have to be | |
| profitable, and it really is going to be "anyone else"? Or, | |
| would it be limited to "big tech" or select few | |
| corporations who can pay a non-trivial amount of money to | |
| them? | |
| | |
| Do you think it would mean they essentially sell GPL'ed | |
| code for proprietary use? Would it not affect FOSS, which | |
| has been till now partially powered by the promise to | |
| contributors that their (often voluntary) work would remain | |
| for public benefit? | |
| | |
| Do you think someone would create and make public (and | |
| gather so much contributor effort) something on the scale | |
| of Linux, if they knew that it would be open to be | |
| scraped by | |
| an intermediary who can sell it at whatever price they | |
| choose to set to companies that then are free to call it | |
| their own and repackage commercially without contributing | |
| back, providing _their_ source or crediting the original | |
| authors in any way? | |
| Pet_Ant wrote: | |
| > Do you think that the cost of tokens will remain low | |
| enough once these companies for now operating at loss | |
| have to be profitable | |
| | |
| New techniques are coming, new hardware processes are | |
| being developed, and the incremental unit cost is low. | |
| Once they fill up the labs, they'll start selling to | |
| consumers till the price becomes the cost of a bucket of | |
| sand and the cost to power a light-bulb. | |
| ben_w wrote: | |
| > Do you think that the cost of tokens will remain low | |
| enough once these companies for now operating at loss | |
| have to be profitable, and it really is going to be | |
| "anyone else"? Or, would it be limited to "big tech" or | |
| select few corporations who can pay a non-trivial amount | |
| of money to them? | |
| | |
| When considering current models, it's not in their power | |
| to prevent it: | |
| | |
| DeepSeek demonstrated big models could be trained very | |
| easily for a modest budget, and inference is mostly | |
| constrained by memory access rather than compute, so if | |
| we had smartphones with a terabyte of RAM with a very | |
| high bandwidth to something like a current generation | |
| Apple NPU, things like DeepSeek R1 would run locally at | |
| (back-of-the-envelope calculation) about real-time -- and | |
| drain the battery in half an hour if you used that model | |
| continuously. | |
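| | |
| (The back-of-the-envelope version, where every number is | |
| an assumption: roughly 37B active parameters per token | |
| for R1, 8-bit weights, and a made-up phone memory bus. | |
| | |
|     active_params = 37e9   # ~37B active per token | |
|     bytes_per_param = 1    # 8-bit quantization | |
|     bandwidth = 400e9      # hypothetical 400 GB/s bus | |
| | |
|     bytes_per_token = active_params * bytes_per_param | |
|     print(bandwidth / bytes_per_token)  # ~11 tokens/s | |
| | |
| Ten-ish tokens per second is roughly reading speed, | |
| hence "about real-time".) | |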
| | |
| But current models are not good enough, so the real | |
| question is: "who will hold what power when such models | |
| hypothetically are created?", and I have absolutely no | |
| idea. | |
| | |
| > Do you think someone would create and make public (and | |
| gather so much contributor effort) something on the scale | |
| Linux, if they knew that it would be open to be scraped | |
| by an intermediary who can sell it at whatever price they | |
| choose to set to companies that then are free to call it | |
| their own and repackage commercially without contributing | |
| back, providing their source or crediting the original | |
| authors in any way? | |
| | |
| Consider it differently: how much would it cost to use an | |
| LLM to reproduce all of Linux? | |
| | |
| I previously rough-estimated that at $230/megatoken of | |
| (useful final product) output, an AI would be energy- | |
| competitive vs. humans consuming calories to live: | |
| https://news.ycombinator.com/item?id=44304186 | |
| | |
| As I don't have specifics, I need to Fermi-estimate this: | |
| | |
| I'm not actually sure how big any OS (with or without | |
| apps) is, but I hear a lot of numbers in the range of | |
| 10-50 million lines. Let's say 50 Mloc. | |
| | |
| I don't know the tokens per line, I'm going to guess 10. | |
| | |
| 50e6 lines * 10 tokens/line * $230/(1e6 tokens) = | |
| $115,000 | |
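| | |
| Or, as a sketch anyone can rerun with their own guesses | |
| (every input is just the assumption above): | |
| | |
|     loc = 50e6               # assumed OS size in lines | |
|     tokens_per_line = 10     # assumed | |
|     usd_per_megatoken = 230  # break-even rate above | |
| | |
|     cost = (loc * tokens_per_line / 1e6 | |
|             * usd_per_megatoken) | |
|     print(f"${cost:,.0f}")   # $115,000 | |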
| | |
| There's no fundamental reason for $230/megatoken beyond | |
| that's when the AI is economically preferable to feeding | |
| a human who is doing it for free and you just need to | |
| stop them from starving to death, even if you have | |
| figured out how to directly metabolise electricity which | |
| is much cheaper than food: on the one hand, $230 is on | |
| the very expensive end of current models; on the | |
| second hand, see previous point about running DeepSeek R1 | |
| on phone processor with more RAM and bandwidth to match; | |
| on the third hand*, see other previous point that current | |
| models just aren't good enough to bother. | |
| | |
| So it's currently not available at any price, but once | |
| the quality is good, even charging a rate that's | |
| expensive today makes all humans unemployable. | |
| | |
| * Insert your own joke about about off-by-one-errors | |
| olalonde wrote: | |
| Seems like a fake problem. Who would sue QEMU for using AI- | |
| generated code? OpenAI? Anthropic? | |
| ethbr1 wrote: | |
| Anyone whose code is in a used model's training set.* | |
| | |
| This is about future existential tail risk, not current risk. | |
| | |
| * Depending on future court decisions in different | |
| jurisdictions | |
| olalonde wrote: | |
| Again, seems so implausible that it's not worth worrying | |
| about. | |
| ethbr1 wrote: | |
| Were you around for SCO? https://en.m.wikipedia.org/wiki/ | |
| Timeline_of_SCO%E2%80%93Linu... | |
| | |
| IP disputes aren't trivial, especially for shoestring- | |
| funded OSS. | |
| consp wrote: | |
| It is implausible until it isn't, and qemu is taking a | |
| very cheap and easy step by outright banning it, covering | |
| their ass just in case. The threat is low-plausibility | |
| but high-impact, and thus a valid one to consider. | |
| olalonde wrote: | |
| I disagree. Open source projects routinely deal with far | |
| greater risk, like employees contributing open source | |
| code on company time without explicit authorization. Yet | |
| they generally allow code from anyone without much | |
| verification (some have a contributor agreement but it's | |
| based on trust, there's no actual verification). I stand | |
| by my 2022 prediction[0]: no one will get sued for using | |
| LLM-generated code. | |
| | |
| [0] https://news.ycombinator.com/item?id=31849027 | |
| stronglikedan wrote: | |
| To me, AI doesn't generate code by itself, so there's no | |
| difference between the outputted code and code written by | |
| the human who prompted it. Likewise, the humans who | |
| prompt it are solely responsible for making sure it is | |
| correct, and solely to blame for any negative outcomes of | |
| its use, just as if they had written it themselves. | |
| hughw wrote: | |
| I'd hope there could be some distinction between using an | |
| LLM as a super autocomplete in your IDE vs. giving it | |
| high-level guidelines and making it generate substantive | |
| code. It's a gray | |
| area, sure, but if I made a contribution I'd want to be able to | |
| use the labor-saving feature of Copilot, say, without danger of | |
| it copying an algorithm from open source code. For example, today | |
| I generated a series of case statements and Copilot detected the | |
| pattern and saved me tons of typing. | |
| dheera wrote: | |
| That and also just AI glasses that become an extension of my | |
| mind and body, just giving me clues and guidance on everything | |
| I do including what's on my screen. | |
| | |
| I see those glasses as becoming just a part of me, just like my | |
| current dumb glasses are a part of me that enables me to see | |
| better, the smart glasses will help me to see AND think better. | |
| | |
| My brain was trained on a lot of proprietary code as | |
| well; the copyright issues around AI models are pointless | |
| western NIMBY thinking and will lead to the downfall of | |
| western civilization if the West keeps pursuing legal | |
| what-ifs as an excuse to reject awesome technology. | |
| mattl wrote: | |
| I'm interested to see how this plays out. I'd like a | |
| similar policy for my projects, but also a similar | |
| policy/T&C that prohibits crawling of the content. | |
| candiddevmike wrote: | |
| The only way to prohibit crawling is to go back to | |
| invite-only, probably self-hosted, repositories. These | |
| companies have no shame; your T&Cs won't mean anything to | |
| them, and you have no way of proving they violated them | |
| without some kind of discovery into their training data. | |
| acedTrex wrote: | |
| Oh hey, the thing I predicted in my blog post titled "yes | |
| i will judge you for using AI" happened lol | |
| | |
| Basically I think open source has traditionally relied | |
| HEAVILY on hidden competency markers to judge the quality | |
| of incoming contributions. LLMs turn that entire concept | |
| on its head by presenting code that has competency | |
| markers but none of the backing experience. It is a very, | |
| very jarring experience for experienced individuals. | |
| | |
| I suspect that virtual or in person meetings and other forms of | |
| social proof independent of the actual PR will become far more | |
| crucial for making inroads in large projects in the future. | |
| SchemaLoad wrote: | |
| I've started seeing this at work with coworkers using | |
| LLMs to generate code reviews. They submit comments way | |
| above their skill level, which almost trick you into | |
| thinking they are correct, since only a very skilled | |
| developer would make these suggestions. And then | |
| ultimately you end up wasting tons of time proving these | |
| suggestions wrong - far more time than the person pasting | |
| them spent to generate them. | |
| acedTrex wrote: | |
| Yep, 100%, it is something I have also observed. Frankly | |
| it has been frustrating to the point that I spun up a | |
| quick one-off html site to rant/get my thoughts out. | |
| https://jaysthoughts.com/aithoughts1 | |
| whatevertrevor wrote: | |
| Just some feedback: your site is hard to read on mobile | |
| devices because of the sidebar. | |
| acedTrex wrote: | |
| Thank you, I'll get that fixed. | |
| | |
| Edit: Mobile should be fixed now | |
| diabllicseagull wrote: | |
| Funny enough, I had coworkers who similarly had a hold of | |
| the jargon but none of the substance. They would always | |
| turn out to be time sinks for the others doing the useful | |
| work. AI imitating that type of drag on the workplace is | |
| kinda funny ngl. | |
| heisenbit wrote: | |
| Probabilistic patterns strung together are something | |
| different from an end-to-end, intention-driven, solidly | |
| linked chain of thought, grounded on pylons in the | |
| relevant context at critical points. | |
| Groxx wrote: | |
| By far the largest review-effort PRs of my career have been | |
| in the past year, due to mid-sized LLM-built features. | |
| Multiple rounds of other signoffs saying "lgtm" with only | |
| minor style comments only for me to finally read it and see | |
| that no, it is not even _remotely_ acceptable and we have | |
| several uses _built by the same team_ that would fail | |
| immediately if it was merged, to say nothing of the thousands | |
| of other users that might also be affected. Stuff the | |
| reviewers have experience with and didn't think about | |
| because they got stuck in the "looks plausible" rut, rather | |
| than "is correct". | |
| | |
| So it goes back for changes. It returns the next day with | |
| complete rewrites of large chunks. More "lgtm" from others. | |
| More incredibly obvious flaws, race conditions, the works. | |
| | |
| And then round three repeats mistakes that came up in round | |
| one, because LLMs don't learn. | |
| | |
| This is not a future style of work that I look forward to | |
| participating in. | |
| tobyhinloopen wrote: | |
| I think a future with LLM coding requires many more | |
| tests, covering both happy and bad flows. | |
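| | |
| For example - a sketch with a made-up parse_size helper, | |
| one happy-path test and one bad-path test: | |
| | |
|     import pytest | |
| | |
|     def parse_size(text: str) -> int: | |
|         """Made-up example: '4K' -> 4096.""" | |
|         units = {"K": 1024, "M": 1024 ** 2} | |
|         if text and text[-1] in units: | |
|             return int(text[:-1]) * units[text[-1]] | |
|         return int(text) | |
| | |
|     def test_happy_flow(): | |
|         assert parse_size("4K") == 4096 | |
| | |
|     def test_bad_flow(): | |
|         with pytest.raises(ValueError): | |
|             parse_size("lots") | |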
| danielbln wrote: | |
| It also needs proper guideline enforcement. If an | |
| engineer produces poorly tested and unreviewed code, then | |
| the buck stops with them. This is a human problem more | |
| than it is a tool problem. | |
| zelphirkalt wrote: | |
| I think the issue is with people taking mental shortcuts | |
| and thus no longer properly thinking about design | |
| decisions and the bigger picture in terms of concepts of | |
| the software. | |
| beej71 wrote: | |
| I'm not really in the field any longer, but one of my | |
| favorite things to do with LLMs is ask for code reviews. I | |
| usually end up learning something new. And a good 30-50% of | |
| the suggestions are useful. Which actually isn't skillful | |
| enough to give it a title of "code reviewer", so I certainly | |
| wouldn't foist the suggestions on someone else. | |
| mrheosuper wrote: | |
| People keep saying LLMs will improve efficiency, but your | |
| comment proves otherwise. | |
| | |
| It looks like LLMs are not good for cooperation, because | |
| the nature of LLMs is randomness. | |
| itsmekali321 wrote: | |
| send your blog link please | |
| acedTrex wrote: | |
| https://jaysthoughts.com/aithoughts1 | |
| | |
| Bit of a rambly rant, but the prediction stuff I was | |
| tongue-in-cheek referring to above is at the bottom. | |
| mattmanser wrote: | |
| Looks like your blog post got submitted here and then I | |
| assume triggered the flame war flag. A lot of people just | |
| reading the title and knee-jerking in the comments: | |
| | |
| https://news.ycombinator.com/item?id=44384610 | |
| | |
| Funny, as the entire thing starts off with "Now, full | |
| disclosure, the title is a bit tongue-in-cheek.". | |
| acedTrex wrote: | |
| I suppose I did bring that on myself with the title | |
| didn't I? I believe I have fixed the site for mobile, so | |
| hopefully some of those thread complaints have been | |
| rectified. | |
| stevage wrote: | |
| > Basically I think open source has traditionally HEAVILY | |
| relied on hidden competency markers to judge the quality of | |
| incoming contributions. | |
| | |
| Yep, and it's not just code. Student essays, funding | |
| applications, internal reports, fiction, art...everything that | |
| AI touches has this problem that AI outputs look superficially | |
| similar to the work of experts. | |
| whatevertrevor wrote: | |
| I have learned over time that the actually smart people | |
| worth listening to avoid jargon beyond what is strictly | |
| necessary and talk in simple terms with specific | |
| goals/improvements/changes in mind. | |
| | |
| If I'm having to reread something over and over to understand | |
| what they're even trying to accomplish, odds are it's either | |
| AI generated or an attempt at sounding smart instead of being | |
| constructive. | |
| danielbln wrote: | |
| The trajectory so far has been that AI outputs are | |
| increasingly converging with expert output, not just in | |
| superficial similarity but in quality. We are obviously | |
| not there yet, and some might say we never will be. But | |
| if we do get there, there is a whole new conversation to | |
| be had. | |
| zelphirkalt wrote: | |
| I suspect that there are at least 1 or 2 more significant | |
| discoveries in terms of architecture and the general way | |
| models work before these things become actual experts. | |
| Maybe they will never get there, and we will instead | |
| discover how to better incorporate facts and reasoning, | |
| rather than just ingesting billions of training data | |
| points. | |
| BurningFrog wrote: | |
| Would it make sense to include the complete prompt that generated | |
| the code with the code? | |
| astrobiased wrote: | |
| It would need to be more than that. A prompt for one | |
| model can have different results than for another. Even | |
| when the model gets different treatment at inference | |
| time, e.g. quantization, the same prompt can produce | |
| different output from the unquantized and quantized | |
| versions. | |
| verdverm wrote: | |
| Even more so: when you come back in a few years to | |
| understand the code, the model will no longer be | |
| available. | |
| galangalalgol wrote: | |
| One of several reasons to use an open model even if it | |
| isn't quite as good. Version control the models and commit | |
| the prompts with the model name and a hash of the | |
| parameters. I'm not really sure what value that | |
| reproducibility adds though. | |
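| | |
| If you did want it, the committed record could be as | |
| simple as this sketch (file names and fields made up): | |
| | |
|     import hashlib, json | |
| | |
|     def sha256(path: str) -> str: | |
|         h = hashlib.sha256() | |
|         with open(path, "rb") as f: | |
|             for b in iter(lambda: f.read(1 << 20), b""): | |
|                 h.update(b) | |
|         return h.hexdigest() | |
| | |
|     record = { | |
|         "model": "devstral-small",  # assumed name | |
|         "weights_sha256": | |
|             sha256("model.safetensors"), | |
|         "prompt": open("prompt.txt").read(), | |
|     } | |
|     with open("provenance.json", "w") as f: | |
|         json.dump(record, f, indent=2) | |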
| catlifeonmars wrote: | |
| You'd need to hash the model weights and save the seeds | |
| for the temperature PRNG as well, in order to verify the | |
| provenance. Ideally it would be reproducible, right? | |
| danielbln wrote: | |
| Maybe 2 years ago. Nowadays LLMs call functions and use | |
| tools; good luck capturing that in a way that's | |
| reproducible. | |
| ethan_smith wrote: | |
| Including prompts would create transparency but still wouldn't | |
| resolve the underlying copyright uncertainty of the output or | |
| guarantee the code wasn't trained on incompatibly-licensed | |
| material. | |
| Aeolun wrote: | |
| This seems absolutely impossible to enforce. All my editors give | |
| me AI-assisted code hints. Zed, Cursor, VS Code. All of them now | |
| show me autocomplete that comes from an LLM. There's absolutely | |
| no distinction between that code, and code that I've typed out | |
| myself. | |
| | |
| It's like complaining that I may have no legal right to submit my | |
| stick figure because I potentially copied it from the drawing of | |
| another stick figure. | |
| | |
| I'm firmly convinced that these policies are only written to have | |
| plausible deniability when stuff with generated code gets | |
| inevitably submitted anyway. There's no way the people that write | |
| these things aren't aware they're completely unenforceable. | |
| luispauloml wrote: | |
| > I'm firmly convinced that these policies are only written to | |
| have plausible deniability when stuff with generated code gets | |
| inevitably submitted anyway. | |
| | |
| Of course it is. And nobody said otherwise, because that | |
| is explicitly stated in the commit message: | |
| [...] More broadly there is, as yet, no broad consensus | |
| on the licensing implications of code generators | |
| trained on inputs under a wide variety of licenses | |
| | |
| And in the patch itself: [...] With AI | |
| content generators, the copyright and license status of the | |
| output is ill-defined with no generally accepted, | |
| settled legal foundation. | |
| | |
| What other commenters pointed out is that, beyond the legal | |
| issue, other problems also arise from the use of AI-generated | |
| code. | |
| teeray wrote: | |
| It's like the seemingly confusing "nothing to declare" | |
| gates at customs, which you walk through once you've | |
| already made your declarations. Walking through that gate | |
| is a conscious act that places culpability on you, so you | |
| can't simply say "oh, I forgot" or something. | |
| | |
| The thinking here is probably similar: if AI-generated code | |
| becomes poisonous and is detected in a project, the DCO could | |
| allow shedding liability onto the contributor who said it | |
| wasn't AI-generated. | |
| Filligree wrote: | |
| > Of course it is. And nobody said otherwise, because that is | |
| explicitly stated on the commit message | |
| | |
| Don't be ridiculous. The majority of people are in fact | |
| honest, and won't submit such code; the major effect of the | |
| policy is to prevent those contributions. | |
| | |
| Then you get plausible deniability for code submitted by | |
| villains, sure, but I'd like to hope that's rare. | |
| raincole wrote: | |
| I think most people don't make money by submitting code to | |
| QEMU, so there isn't that much incentive to cheat. | |
| shmerl wrote: | |
| Neovim doesn't force you to use AI, unless you configure it | |
| yourself. If your editor doesn't allow you to switch it off, | |
| there must be a big problem with it. | |
| sysmax wrote: | |
| I wish people would make a distinction regarding the | |
| size/scope of the AI-generated parts - like with video | |
| copyright law, where a 5-second clip from a copyrighted | |
| movie is usually considered fair use and not frowned | |
| upon. | |
| | |
| Because for projects like QEMU, current AI models can actually do | |
| mind-boggling stuff. You can give it a PDF describing an | |
| instruction set, and it will generate you wrapper classes for | |
| emulating particular instructions. Then you can give it one class | |
| like this and a few paragraphs from the datasheet, and it will | |
| spit out unit tests checking that your class works as the CPU | |
| vendor describes. | |
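| | |
| To make that concrete, a toy sketch - invented add- | |
| immediate semantics, nothing from a real datasheet: | |
| | |
|     class AddImm: | |
|         """Toy add-immediate instruction wrapper.""" | |
|         def __init__(self, reg: int, imm: int): | |
|             self.reg, self.imm = reg, imm | |
| | |
|         def execute(self, cpu: dict) -> None: | |
|             # wrap at 32 bits, like a real register | |
|             total = cpu[self.reg] + self.imm | |
|             cpu[self.reg] = total & 0xFFFFFFFF | |
| | |
|     def test_addimm_wraps(): | |
|         cpu = {0: 0xFFFFFFFF} | |
|         AddImm(reg=0, imm=1).execute(cpu) | |
|         assert cpu[0] == 0  # documented wraparound | |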
| | |
| Like, you can get from 0% to 100% test coverage several orders of | |
| magnitude faster than doing it by hand. Or refactoring, where you | |
| want to add support for a particular memory virtualization trick, | |
| and you need to update 100 instruction classes based on a | |
| straightforward, but not 100% formal, rule. A human | |
| developer would be pulling their hair out, while an LLM | |
| will do it faster than you can get a coffee. | |
| echelon wrote: | |
| Qemu can make the choice to stay in the "stone age" if they | |
| want. Contributors who prefer AI assistance can spend their | |
| time elsewhere. | |
| | |
| It might actually be prudent for some (perhaps many | |
| foundational) OSS projects to reject AI until the full legal | |
| case law precedent has been established. If they begin taking | |
| contributions and we find out later that courts find this is in | |
| violation of some third party's copyright (as shocking as that | |
| outcome may seem), that puts these projects in jeopardy. And | |
| they certainly do not have the funding or bandwidth to avoid | |
| litigation. Or to handle a complete rollback to pre-AI | |
| background states. | |
| 762236 wrote: | |
| It sounds like you're saying someone could rewrite Qemu on | |
| their own, with the help of AI. That would be pretty funny. | |
| mrheosuper wrote: | |
| Given enough time, a monkey randomly types on typewriter can | |
| rewrite QEMU. | |
| halostatue wrote: | |
| Not all jurisdictions are the US, and not all | |
| jurisdictions allow fair use; some instead have specific | |
| fair dealing laws. And some jurisdictions have no fair | |
| dealing laws at all, meaning that _every_ use has to be | |
| cleared. | |
| | |
| There are simple algorithms that everyone will implement the | |
| same way down to the variable names, but aside from those | |
| fairly rare exceptions, there's no "maximum number of lines" | |
| metric to describe how much code is "fair use" regardless of | |
| the licence of the code "fair use"d in your scenario. | |
| | |
| Depending on the context, even in the US that 5-second clip | |
| _would not pass fair use doctrine muster_. If I made a new film | |
| cut _entirely_ from five second clips of different movies and | |
| tried a fair use doctrine defence, I would likely never see the | |
| outside of a courtroom for the rest of my life. If I tried to | |
| do so with licensing, I would probably pay more than it cost to | |
| make all those movies. | |
| | |
| Look up the decisions over the last two decades over sampling | |
| (there are albums from the late 80s and 90s -- when sampling | |
| was relatively new -- which will never see another pressing or | |
| release because of these decisions). The musicians and | |
| producers who chose the samples thought they would be covered | |
| by fair use. | |
| naveed125 wrote: | |
| Coolest thing I've seen today. | |
| pretoriusdre wrote: | |
| AI-generated code is generally pretty good and incredibly fast. | |
| | |
| Seeing this new phenomenon must be difficult for those people who | |
| have spent a long time perfecting their craft. Essentially, they | |
| might feel that their skillsets are being undermined. It would be | |
| especially hard for people who associate a lot of their self- | |
| identity with their job. | |
| | |
| Being a purist is noble, but I think that this stance is foolish. | |
| Essentially, people who chose not to use AI code tools will be | |
| overtaken by the people who do. That's the unfortunate reality. | |
| loktarogar wrote: | |
| It's not a stance about the merits of AI generated code but | |
| about the legal status of it, in terms of who owns it and | |
| related concepts. | |
| pretoriusdre wrote: | |
| Yes, the reasoning behind the decision is clear, and as | |
| you described. But I would also make the point that the | |
| decision comes with certain consequences, to which a | |
| discussion about merits is directly relevant. | |
| loktarogar wrote: | |
| > Essentially, people who chose not to use AI code tools | |
| will be overtaken by the people who do. That's the | |
| unfortunate reality. | |
| | |
| Who is going to "overtake" QEMU, what exactly does that | |
| mean, and why will it matter if they do? | |
| danielbln wrote: | |
| OP said people. QEMU is not people. | |
| loktarogar wrote: | |
| We're talking about a decision that the people behind | |
| QEMU made that affects people, the consequences of which | |
| make the discussion of merits "directly relevant". | |
| | |
| If we're talking about something involving neither QEMU | |
| nor the people behind it, where is the relevance? It's | |
| just a rant on AI at that point. | |
| N1H1L wrote: | |
| I use LLMs for generating documentation - I write my | |
| code, and ask Claude to write my documentation. | |
| auggierose wrote: | |
| I think you are doing it the wrong way around. | |
| insane_dreamer wrote: | |
| Maybe not. I trust Claude to write docs. I don't trust it to | |
| write my code the way I want. | |
| jssjsnj wrote: | |
| Oi | |
| abhisek wrote: | |
| > It's best to start strict and safe, then relax. | |
| | |
| Makes total sense. | |
| | |
| I am just wondering how we differentiate between AI- | |
| generated code and human-written code that is influenced | |
| by or copied from some unknown source. The same licensing | |
| problem may happen with human code as well, especially in | |
| OSS where anyone can contribute. | |
| | |
| Given the current usage, I am not sure if AI generated code has | |
| an identity of its own. It's really a tool in the hand of a | |
| human. | |
| catlifeonmars wrote: | |
| > Given the current usage, I am not sure if AI generated code | |
| has an identity of its own. It's really a tool in the hand of a | |
| human. | |
| | |
| It's a power saw. A really powerful tool that can be dangerous | |
| if used improperly. In that sense the code generator can have | |
| more or less of a mind of its own depending on the wielder. | |
| | |
| Ok I think I've stretched the analogy to the breaking point... | |
| b0a04gl wrote: | |
| there's no audit trail for how most code gets shaped | |
| anyway - a teammate's intuition from a past outage, a | |
| one-liner from some old jira ticket, even the shape of a | |
| func pulled from habit. none of that is reviewable, but | |
| still it gets trusted lol | |
| | |
| ai moves faster than group consensus. this ban won't slow | |
| down the tech; it may just make projects like qemu harder | |
| to enter, harder to scale, harder to test properly | |
| | |
| so if we maintain code like this we gotta know the trade | |
| we're making: we're preserving trust but limiting | |
| throughput. maybe fine, idk, but don't confuse it with | |
| future-proofing | |
| | |
| i kinda feel it exposes that trust in oss is social, not | |
| epistemic. we accept complex things if we know who | |
| dropped them, and we reject clean things if they smell | |
| synthetic | |
| | |
| so the real qn isn't "did we use ai?" it's "can we even | |
| maintain this in 6mo?" and if the answer's yes it doesn't | |
| really matter who produced the code fr | |
| caleblloyd wrote: | |
| Signed off mostly by people at Red Hat, which is owned by | |
| IBM, which makes Watson, which beat humans at Jeopardy in | |
| 2011. | |
| | |
| > These are early days of AI-assisted software development. | |
| | |
| Are they? Or is this just IBM destroying another acquisition | |
| slowly. | |
| | |
| Meanwhile the .NET runtime team is fully embracing AI, | |
| which people on the outside may laugh at, but you have | |
| extremely talented engineers like Stephen Toub and David | |
| Fowler advocating for it. | |
| | |
| So enterprises: next time you have an IBM rep trying to sell you | |
| AI services, do yourself a favor and go to any other number of | |
| companies out there who are actually serious about helping you | |
| build for the future. | |
| | |
| And since I am a North Carolina native, here's to hoping IBM and | |
| RedHat get their stuff together. | |
| bgwalter wrote: | |
| It is interesting to read the pro-AI rant in the comments on the | |
| linked commit. The person who is threatening to use "AI" anyway | |
| has almost no contributions either in qemu or on GitHub in | |
| general. | |
| | |
| This is the target group for code generators. All talk but no | |
| projects. | |
| ludicrousdispla wrote: | |
| >> The tools will mature, and we can expect some to become safely | |
| usable in free software projects. | |
| | |
| It should be possible to build a useful AI code generator | |
| for a given programming language solely from the source | |
| code of the language itself. Doing so, however, would | |
| require some maturity. | |
| zoobab wrote: | |
| BigTech now controls Qemu? | |
| | |
| "Signed-off-by: Daniel P. Berrange <[email protected]> | |
| Reviewed-by: Kevin Wolf <[email protected]> Reviewed-by: Stefan | |
| Hajnoczi <[email protected]> Reviewed-by: Alex Bennee | |
| <[email protected]> Signed-off-by: Markus Armbruster | |
| <[email protected]> Signed-off-by: Stefan Hajnoczi | |
| <[email protected]>" | |
| wlkr wrote: | |
| incomingpain wrote: | |
| Using AI code generators, I have been able to get a code | |
| base large enough that the AI was starting to make | |
| nonsense changes. | |
| | |
| However, my overall experience has me thinking about how | |
| this is going to be a massive boon to open source. So | |
| many patches, so many new tools will be created to | |
| streamline getting new packages into repos. Everything | |
| can be tested. | |
| | |
| Open source is going to be epicly boosted now. | |
| | |
| QEMU deciding to sit out this acceleration is crazy to | |
| me, but it is probably what is going to give | |
| Xen/Docker/Podman the lead. | |
| flerchin wrote: | |
| I suppose the practical effect will be that contributors who use | |
| AI will have to defend their code as if they did not. To me, this | |
| implies more ownership of the code and deep understanding of it. | |
| This exchange happens fairly often in PRs I'm involved with: | |
| | |
| "Why did you do this insane thing?" | |
| | |
| "IDK, claude suggested it and it works." | |
| UrineSqueegee wrote: | |
| if AI using books to train isn't copyright infringement, | |
| then the outputted code isn't copyrighted material either | |
| tqwhite wrote: | |
| I don't blame them for worrying about it. The policy | |
| should not be to forbid it, but to make sure you don't | |
| leave artifacts, because I guarantee people are going to | |
| use a bot to write their code. Hell, in six months, I | |
| doubt you will be able to get a code editor that doesn't | |
| use AI for code completion at least. | |
| | |
| Also, AI coded programs will be copyrightable just like the old | |
| days. You think the big corps are going to both not use bot | |
| coding and give up ownership of their code? Fat chance. | |
| | |
| Remember the Mickey Mouse copyright extension? If the | |
| courts aren't sensible, we will have one of those the | |
| next day. | |
| | |
| The old days ended very abruptly this time. | |
| randomNumber7 wrote: | |
| I mean, for low-level C code the current LLMs are not | |
| that helpful anyway. | |
| | |
| On the other hand I am 100% sure that every company that doesn't | |
| use LLMs will be out of business in 10 years. | |
| randomNumber7 wrote: | |
| I know a secret: you can read the code the AI generated | |
| for you and check whether it is what you want. It is | |
| still faster than writing it yourself most of the time. | |
| JonChesterfield wrote: | |
| Like skimming through a maths textbook right? Way quicker than | |
| writing one, same reassuring sense of understanding. | |
| saurik wrote: | |
| As someone who once worked on a product that had to | |
| carefully walk the line of legality, I haven't found any | |
| mention in this discussion of what I imagine is a key | |
| problem for qemu, one that other projects don't face: as | |
| an emulator, it is already under a lot of scrutiny for | |
| legality, and so it is going to need to be a lot more | |
| conservative than other random projects with respect to | |
| increasing its legal risk. | |
___________________________________________________________________ | |
(page generated 2025-06-27 05:01 UTC) |