Comment Re:Sounds like a good lawsuit (Score 1) 39

You are right, get legal advice, the cost can be passed on to them anyway.

AIUI, your costs can't (or couldn't) generally be passed on when using the small claims system. Has that changed? It's been a while since I went through the process, so it's possible that my information here is out of date.

Comment Re:Sounds like a good lawsuit (Score 3, Informative) 39

There is obviously a personal data angle here. There might also be a defamation angle if the system works as implied by TFS, since it appears that someone's reputation was damaged because someone else lied about them, and that this demonstrably caused harm. If there was more than one relevant incident, there might also be a harassment angle.

Please be careful with that advice about requesting compensation in a Letter Before Action, though. There are fairly specific rules for what you can and can't claim under our system, and just going in claiming some arbitrary figure of a few thousand pounds in "compensation" for vague damages is far from guaranteed to get the result you're hoping for. If someone were serious about challenging this kind of behaviour, they might do better to consult a real lawyer initially to understand what they could realistically achieve and what kinds of costs and risks would be involved.

Comment Re: Would anyone have noticed? (Score 0) 61

I own a tiny indie studio in Chicagoland and my peers own some of the huge studios in Chicagoland.

Cinespace is dead right now. It has ONE show active. The other studios are so dead that they're secretly hosting bar mitzvahs and pickleball tournaments for $1500 a day just to pay property taxes.

My studio is surprisingly busy, but I'm cheap and cater to non-union folks with otherwise full-time jobs.

Comment Re:That's because you don't understand (Score 1) 135

Some are. I work more with smaller businesses than Big Tech and I don't think we've ever had more interest in our software development services.

There is a rational concern that technical people will understand the benefits and limitations of generative AI, but management and executive leadership will fall for the hype because it was in the right Gartner quad or something, and that will lead to restructuring and job losses. Businesses that get that wrong will probably be making a very expensive mistake, and personally I'm quite looking forward to bumping our rates very significantly when they come crying to people who actually know what they're doing to clean up the mess later. It's not nice for anyone whose livelihood is being toyed with in the meantime, obviously, but I don't buy the argument that this isn't fundamentally an economic inevitability, as the comment I replied to was implying.

Comment Re:That's because you don't understand (Score 1) 135

Historically and economically, it is far from certain that your hypothetical 20% increase in productivity would actually result in a proportionate decrease in employment. Indeed, the opposite effect is sometimes observed. Increased efficiency makes each employee more productive/valuable, which in turn makes newer and harder problems cost-effective to solve.

Personally, I question whether any AI coding experiment I have yet performed myself resulted in as much as a 20% productivity gain anyway. I have seen plenty of first-hand evidence to support the theory that seems to be shared by most of the senior+ devs I've talked with, that AI code generators are basically performing on the level of a broadly- but shallowly-experienced junior dev and not showing much qualitative improvement over time.

Whenever yet another tech CEO trots out some random stat about how AI is now writing 105% of the new code in their org, I am reminded of the observation by another former tech CEO, Bill Gates, that measuring programming progress by lines of code is like measuring aircraft building progress by weight.

User Journal

Journal Journal: Pope Leo XIV's first challenge: Justice

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.pillarcatholic.com%2Fp%2Fwhats-going-on-in-cardinal-prevosts

Pope Leo XIV has a unique chance to stand with victims and flip the script by flying to Peru soon to testify against Fr. Eleuterio Vasquez Gonzales.

Doing so will let him purge the sodomites. Doing so will send a message to all that Cardinals will no longer be given the red hat to escape justice.

Comment Re:Weather: several times a day (Score 1) 102

I trust Weather Underground instead: it's private citizens, not government propaganda, and it's far more accurate.

Post-COVID, I don't trust governments or corporations to do science. I saw too much statistical abuse, p-hacking, politics, and outright lying about the scientific method to trust federal funding OR corporate funding of science.

Science is best done by private citizens funding their own experiments with outside jobs, not academic peer-review cancel culture bubbles.

Comment Re:BS (Score 1) 149

LLMs perform very well with what they've got in context.

True in general, I agree. How well any local tools pick out the context to upload seems to be a big (maybe the big) factor in how good their results are with the current generation of models, and if they're relying on a RAG approach, there's definitely scope for that selection to work well or badly.

That said, the experiment I mentioned that collapsed horribly was explicit about adding those source files as context. Unless there was then a serious bug related to uploading that context, it looks like one of the newest models available really did just get a prompt marginally more complicated than "Call this named function and print the output" completely wrong on that occasion. Given that several other experiments using the same tool and model did not seem to suffer from that kind of total collapse, and the performance of that tool and model combination was quite inconsistent overall, such a bug seems very unlikely, though of course I can't be 100% certain.

It's also plausible that the model was confused by having too much context. If it hadn't known about the rest of the codebase, including underlying SQL that it didn't need to respond to the immediate prompt, maybe it would have done better and not hallucinated a bad implementation of a function that was already there.

That's an interesting angle, IMHO, because it's the opposite take to the usual assumption that LLMs perform better when they have more relevant context. In fact, being more selective about the context provided is something I've noticed a few people advocating recently, though usually on cost/performance grounds rather than because they expected it to improve the quality of the output. This could become an interesting subject as we move to models that can accept much more context: if it turns out that having too much information can be a real problem, the general premise that soon we'll provide LLMs with entire codebases to analyse becomes doubtful, but then the question is what we do instead.
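
To make that concrete, here is a very rough sketch of what "being more selective about the context" could look like. It's purely illustrative, not how any particular tool actually works: the token budget, the keyword-overlap relevance score, and every name in it are my own assumptions (real tools presumably use embeddings or a proper index rather than anything this naive).

    import re

    TOKEN_BUDGET = 8000  # hypothetical cap on how much context we choose to send

    def rough_tokens(text):
        # crude approximation: roughly four characters per token
        return len(text) // 4

    def relevance(prompt, source):
        # naive score: how many words from the prompt also appear in the file
        prompt_words = set(re.findall(r"\w+", prompt.lower()))
        source_words = set(re.findall(r"\w+", source.lower()))
        return len(prompt_words & source_words)

    def select_context(prompt, files):
        # files: dict mapping path -> file contents; returns the paths worth sending
        scored = [(relevance(prompt, text), path, text) for path, text in files.items()]
        scored.sort(key=lambda item: item[0], reverse=True)
        chosen, used = [], 0
        for score, path, text in scored:
            cost = rough_tokens(text)
            if score == 0 or used + cost > TOKEN_BUDGET:
                continue  # skip files that look irrelevant or would blow the budget
            chosen.append(path)
            used += cost
        return chosen

The interesting question is whether something along these lines should be filtering for quality as well as cost: in the scenario above, leaving out the SQL layer entirely might have avoided the hallucinated reimplementation of a function that already existed.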

Comment Re:BS (Score 1) 149

I could certainly accept the possibility that I write bad prompts if that had been an isolated case, but such absurdities have not been rare in my experiments so far, and yet in other apparently similar scenarios I've seen much better results. Sometimes the AI nails it. Sometimes it's on a different planet. What I have not seen yet is much consistency in what does or doesn't get workable results, across several tools and models, several variations of prompting style, and both my own experiments and what I've heard about in discussions with others.

The thing is, if an AI-backed coding aid can't reliably parse a simple one-sentence prompt containing a single explicit instruction, together with existing code as context that objectively defines the function call required to get started and the data format that will be returned, I contend that this necessarily means the AI is the problem. Again I can only rely on my own experience, but once you start down the path of spelling out exactly what you want in detail in the prompt and then iterating with further corrections or reinforcement to fix the problems in the earlier responses, I have found it close to certain that the session will end either unproductively, with the results being completely discarded, or with a series of prompts so long and detailed that you might as well have written the code yourself directly. Whatever effect sometimes causes these LLMs to spectacularly miss the mark also seems to be quite sticky.

In the interests of completeness, there are several differences between the scenario you tested and the one I described above that potentially explain the very different results we achieved. I haven't tried anything with Qwen3, so I can't comment on the performance of that model from my own experience. I was using local tools that were handling the communication with (in that case) Sonnet, so they might have been obscuring some problems or failing to pass through some relevant information. And I wasn't providing only the SQL and the function to be called: I gave the tool access to my entire codebase, probably a few thousand lines of code scattered across tens of files in that particular scenario. Any or all of those factors might have made a difference in the cases where I saw the AI's performance collapse.

Comment Re:I for one am SHOCKED. (Score 1) 52

You don't appear to consider the cost to everyone who didn't buy the glasses, but encounters someone wearing them.

This is the thing that people saying things like "You have no reasonable expectation of privacy in public" seem unable to grasp. There is a massive and qualitative difference between casual social observations that would naturally occur but naturally be forgotten just as quickly and the systematic, global scale, permanently recorded, machine-analysed surveillance orchestrated by the likes of Google and Meta. Privacy norms and (if you're lucky) laws supporting them developed for the former environment and are utterly inadequate at protecting us against the risks of the latter.

And it should probably be illegal to sell or operate any device that is intended to be taken into private settings and includes both sensors and communications, so that even in a private setting the organisations behind those devices can be receiving surveillance data without others present even knowing, never mind consenting.

Perhaps a proportionate penalty would be that the entire board and executive leadership team of any such organisation and a random selection of 20 of each of their family and friends should be moved to an open plan jail for a year where there are publicly accessible cameras and microphones covering literally every space. Oh, and any of the 20 potentially innocent bystanders who don't think that's OK have the option to leave, but if they do, their year gets added to the board member or executive they're associated with instead.
