Comment Re:Good AI free fork out there? (Score 1) 73

Have you considered that we've seen a *lot* of software that was great, then gained optional features, then had those features slowly built in, then watched them creep into automatic-but-not-too-intrusive territory, and then saw them become non-optional because removing them would break core functionality?
Have you considered that giving random extensions access to potentially long-lived LLMs (even local ones) might raise the same issues it raises everywhere else?
Have you considered that we're slowly being cooked into apathy about these features, repeating "it's not that bad" again, and again, and again, until they're firmly rooted in place and nobody has a choice anymore?
Because that's a pattern we've seen a lot over the past two years, and I'm not happy to see it happening again and again, especially when we're running out of alternatives.
Mozilla clearly wants to push ads, AI features, and baffling, convoluted privacy moves. Feel free to sit comfortably in your chair and spectate while it keeps moving toward a wall.

Comment Good AI free fork out there? (Score 3, Insightful) 73

I really want to keep supporting Firefox, but Mozilla is making it very hard to do so. I know forks are not a long-term solution, but for now, they seem to be the least evil of the alternatives.
I tried LibreWolf, but unfortunately it is far too restrictive for the general user; even after disabling every "privacy" feature and manually enabling DRM support (unfortunately), some very popular websites like Netflix still won't work correctly.
Is there a fork out there that focuses mostly on cutting out the AI stuff, so that we don't have to collectively keep going into about:config with each release to turn this kind of thing off?
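For reference, the manual route currently looks something like the sketch below, dropped into a user.js in the profile directory. The pref names are a best guess for current releases and tend to move around, so treat them as examples rather than a definitive list:

  // user.js in the Firefox profile directory -- pref names are a best guess
  // and may change or multiply with each release.
  user_pref("browser.ml.enabled", false);      // local ML/inference engine
  user_pref("browser.ml.chat.enabled", false); // AI chatbot sidebar

Maintaining that list by hand every release is exactly the chore I'd like a fork to take over.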

Comment Re:So what? (Score 1) 46

Maybe people who put video online want their video shown largely unmodified to their audience, and maybe the audience would like to see the video as close as possible to how it was when it was created.
Sure, these tools can use AI. In fact, before the whole "chatbot" thing, the term AI referred to a pretty large scientific field, one into which you could even fit video compression to some extent.
Still, it's not YouTube's place to use them. Compression artifacts are a thing, but "enhancing" does not sound that great, especially when it's likely to break fidelity with no recourse, as we can already see in many online posts of upscaled/"improved" old content.

Comment Re:Switch to LibreWolf (Score 2) 107

That's not a solution. If Firefox takes a dive, all the forks are as good as dead. Developing and maintaining a browser is a huge task these days. It is far more interesting to redirect all the knowledge, skills, and money that go into Firefox in the right direction now instead of turning a blind eye to it.

The alternative, if the upstream disappears or stops, is to either severely cut down on features (making the end result virtually useless for 95% of users), move to another, dangerous upstream (hello Google), or magically summon the funding, management, and goodwill to do the work. While not impossible, I think that if there were enough momentum to fork Firefox away from Mozilla, it would have happened by now.

Comment What is motionless here? (Score 2) 40

Regardless of this accomplishment, whether it's real or not (I'm clearly not knowledgeable enough to make any claim on that), I wonder what "motionless" means here. I usually think of motion as always being relative, so I assume these things weren't moving relative to the lab around them. But does that really count as motionless from a physics perspective?
That's a genuine question. I suppose there's some basis for things to qualify as motionless, but when we're talking about these kinds of scales, it feels like keeping pace with a huge rock hurtling through space isn't really motionless.

Comment Why is this in the OS (Score 1) 127

Ok, AI agents are great (laugh), the future is for braindead people (laugh), whatever.
Why does all of this have to be in the OS? Do whatever you want with a nice software suite that runs on top of a regular OS. Have all the integration you want, ways to intercept visuals and generate user input, or libraries and APIs so that any app can talk with any other app. But why does it have to be inside the OS, which should mostly boil down to "you give me hardware, I provide a stable API"?

Comment Re:Good. Steam is a CHILDREN FRIENDLY platform. (Score 5, Informative) 123

The default setting on the Steam store is to fully hide adult-only content. Those titles never show up on the store front page or in search unless you explicitly enable them, and you can't even see their store pages without being logged in.
People who see adult-only content asked to see it; it's not a decision to be made for them by someone else. And the excuse of "but this hides other games" is bogus: there is a dedicated adult-only section, separate from games with occasional adult content. This is not about protecting children, and the same push is happening across many platforms, including adult-only ones.
tl;dr: buzz off; either you have no idea what you're talking about, or you're a gigantic bigot.

Comment Transparency? (Score 3, Insightful) 18

I wasn't really worried about how my IDE reads, edits, and writes files, nor how it highlights differences, nor how it grabs something I typed and sends it to a backend.
I'm worried about that backend: it receives everything supposedly needed to make decisions about the code, it is fully closed, it is operated by an unreliable third party, and that third party's promise to play fair is the only safety net.

More open source is great, but calling this a move to improve transparency and trust in AI "agents" or whatever is a joke. "You can audit everything up to the part you're suspicious about," eh?

Comment Yes, I do. For what it's good at. (Score 1) 248

Let's start with a disclaimer: this is about LLMs, not AI. AI is a very large field that existed well before LLMs became the latest craze, and hopefully it will keep existing until we get something impressive out of it.

Also, some issues with LLMs stem not from their output but from the economic models around them, privacy issues, licensing issues, etc. To address some of those, most of our daily work is done with locally running models on cheap hardware, so no 400B-parameter stuff.

There are four "main" uses I'm looking into so far, some as experiments, some for daily work:

  • get some quick info
  • code autocomplete/code generation
  • full project generation (more as an experiment)
  • documentation/RAG

Getting quick info seems easy: go to Gemini/ChatGPT, ask something without private details, get an answer, then build on that or follow links. Unfortunately, while these are usually able to provide immediately useful info on simple stuff, the details are far too often off the mark. Assuming you have a decent search engine set up (basically, Google without the bloat), it's still better as of today to just search, grab the first two or three links, and work it out from there.

That said, we sometimes get stumped by a very specific or complex issue; in that case, if a basic search has failed, we'll try an LLM, because it's quick and cheap, so there's no harm in trying (except the resource consumption, but that's not the point here). It is sometimes able to give something insightful, but definitely not often enough to be considered a first option. It's more of a last-ditch effort, and I'd say it comes in clutch maybe 20% of the time we're stumped. Not insignificant, but that's a niche inside a niche.

For code completion, it's great on short code. I often stop writing and trigger the autocomplete once I feel I've given enough context through the beginning of the function/class/whatever. It will very often complete with something decent that needs only minimal fixing. I attribute this success rate to the limited scope of the request, and to only triggering it on things I already knew the shape of beforehand. LLMs are great at finding and replacing stuff with some level of consistency, so they're good at autocompletion. It's a bit like how the sentence "The volcano is spewing la" should be easy to complete.

On a good day, that's around a hundred short completions with an acceptance rate of 90% (I actually have these numbers). So the tool does something, and it is helpful; I type less. But I'm not convinced it's efficient: I'm not pulling anything new from the model, I'm just skipping typing the obvious thing. A welcome addition to the toolbox, but I'm not sure it's worth the cost, especially since it optimises typing, which is far from the longest part of the day anyway.

I also dipped into so-called "vibe coding" using commercial offerings (my small 12B model would not have been fair in that regard). I spent a few hours trying to make something I would consider basic, easy to find many examples of, and relatively useful: a browser extension that intercepts a specific download URL and replaces it with something else. At every step of the way, it did make progress. However, it was a mess. None of the initial suggestions were OK by themselves; even the initial scaffolding (a modern browser extension is a JSON manifest and a mostly blank script) would not load until I fed more info into the "discussion". And even pointing out the issues (non-existent constants, invalid JSON properties, mismatched settings, broken code) would not always lead to a proper fix until I spelled it out. To make it short: it wasn't impressive at all, and I'm deeply worried that people find this kind of fumbling acceptable. I basically ended up telling the tool "write this, call this, do this, do that", which is in no way more useful than writing the stuff myself. At best it can be an accessibility aid for people who have a hard time typing, but it's not worth considering if someone is looking for a "dev" of some sort.
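For scale, here is roughly what I was after, as a minimal hand-written sketch (Firefox MV2-style webRequest, with placeholder URLs; not the exact code from that session):

  // background.js -- the manifest.json needs "permissions": ["webRequest",
  // "webRequestBlocking", "https://example.com/*"] plus the usual name/version fields.
  browser.webRequest.onBeforeRequest.addListener(
    () => ({ redirectUrl: "https://example.com/replacement/file.bin" }),
    { urls: ["https://example.com/downloads/original.bin"] },
    ["blocking"]
  );

Half a dozen lines plus a manifest, which is exactly the scale at which "just write it yourself" wins.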

Documentation (private documentation) is both the obvious use case and the one that seems decent on limited datasets. It lets you mulch a bunch of documents together and get information out in natural language (both the query and the reply). My worry here is that it will hide some stuff, but as long as we use it to look for things we *know* are in there, it's okay.

To summarize, there are some applications that work well enough to be used daily without issues, and other applications that seem extremely overhyped to me, even today. The main point of concern is that it cannot be used to gain knowledge; at best it works as a refresher, at worst it works like an auto-typing keyboard. This tech isn't to be trusted with anything you don't know or can't verify, and that includes an awful lot of what commercial offerings propose these days. It can be helpful, but it can also be harmful, and we're in the "let's sell harmful" phase.

Comment They should be worried instead of tired (Score 1) 83

If people are suspicious that call center workers are bots, that's because bots have overtaken the space.
We got such a call at work the other day, and had a bit of time to kill. We asked it to give us its system prompt, and it replied "I'm not allowed to give away my system prompt, I'm here to help you with your house insulation".
It was obviously a bot after that. But over a phone, the speech synthesis is very plausible, the text is not as rigid as it used to be, and there's an added "ambient call center sound" in the background to make it more realistic.
So far these things are about as useful as regular chatbots (as in, completely useless). But I can see *them* being an actual threat to call center workers.

Comment How (Score 1) 68

Weird. Since these "thinking" things came out, I've been playing with them, especially with tools that actually display the "thinking" process. And I've regularly seen one happily unroll what seems to be a genuinely good idea, only to then shrink it down and output the complete (and often wrong) opposite. How did it take the people who make these systems so long to notice?
