Comment Re:Yes, but (Score 1) 23

The devices with screens (that can show you ads) tend to redirect you to an external website, whereas the hockey-puck screenless devices tend to answer the question directly. We don't use the screen devices for much beyond turning lights on and off anymore, since their answers involve looking at the screen. I'm looking forward to switching to a separate, privately hosted LLM solution as soon as good hardware becomes available that isn't a Raspberry Pi in a 3D-printed case.

Comment Re:Malthus was wrong. (Score 1) 242

In my macroeconomics classes, it was repeatedly "proved" that a smaller population is beneficial for the wealthy. My inner conspiracy theorist is 100% sure that the whole LGBTOMGWTFBBQ propaganda is mostly there because THEY decided that promoting anything other than the traditional family, which is known to be the most child-friendly arrangement, will make them richer in the end. My inner skeptic is unsure, but acknowledges that the conspiracy theorist has a point.

Comment Re:Normal in software (Score 1) 141

>VW isn't doing anything untoward here. You get exactly what you paid for.

Many people, myself included, would argue that they are doing something 'untoward.'

They are charging for something that doesn't have any cost to them.

They are intentionally crippling their models to produce extra revenue.

That sucks, and I and people like me don't like it.

Comment IMO the better question to ask is (Score 4, Interesting) 80

which qualities should a programming language / platform have so that it can benefit more from LLM-based tools?

I asked ChatGPT and here is how it ranked TIOBE's top 20:

Top tier -- AI-friendly (9 - 10/10)
These languages get the biggest immediate productivity boost from AI: large corpora, great tooling/IDEs, REPLs or declarative style, and stable ecosystems.
* Python (TIOBE #1) -- huge training corpus, ML ecosystem, REPL + notebooks -> excellent NL->code generation and copilots.
* JavaScript (TIOBE #6) -- massive web ecosystem, short idiomatic snippets that LLMs do well at.
* C# (TIOBE #5) -- Roslyn + mature tooling give strong static-analysis signals for safe code generation and refactors.
* SQL (TIOBE #12) -- declarative queries map exceptionally well from natural language; high precision generation.

Strong tier -- Very good fit (7 - 8.9/10)
Large ecosystems and/or excellent tooling -- AI helps a lot, but either low-level detail or domain specifics require caution.
* Java (TIOBE #4) -- huge corpus + static typing -> reliable refactors and test generation; boilerplate can be an issue.
* C++ (TIOBE #2) -- massive codebase; templates and patterns help, but low-level UB and build/ABI complexity reduce autonomous use.
* C (TIOBE #3) -- plentiful examples, but pointer/UB and platform specifics make fully automatic changes risky.
* Go (TIOBE #8) -- simple, opinionated style and gofmt = AI produces readable, idiomatic code reliably.
* Rust (TIOBE #18) -- excellent compiler diagnostics let AI produce correct-by-construction fixes; steeper semantics but very promising.

Mid tier -- Useful but with caveats (5.5 - 6.9/10)
AI helps productivity (snippets, prototyping, migration), but domain-specific tooling, legacy idioms, or small corpora limit scope.
* Visual Basic (TIOBE #7) -- large legacy base and lots of examples; useful for migrations and automation but inconsistent modern tooling.
* Perl (TIOBE #9) -- expressive one-liners & CPAN help, but varied idioms make safe generation harder.
* Delphi / Object Pascal (TIOBE #10) -- legacy GUIs and embedded uses -- AI aids porting and snippets.
* PHP (TIOBE #15) -- huge web examples/frameworks; dynamic quirks can trip up blind generation.
* R (TIOBE #14) -- great for data tasks; AI helps plotting/analysis code but less so for large engineering systems.
* MATLAB (TIOBE #16) -- numeric/algorithm prototyping benefits from AI, smaller ecosystem for broader tooling.

Lower tier -- Niche / harder for LLMs (3.5 - 5.4/10)
Specialized domains, niche audiences, or small indexed corpora reduce immediate AI impact.
* Fortran (TIOBE #11) -- legacy scientific code -- good for modernization tasks but limited general tooling.
* Ada (TIOBE #13) -- safety-critical focus and niche community; compiler checks help, but corpus is small.
* Kotlin (TIOBE #19) -- modern language with good IDEs, but smaller training footprint vs the biggest languages.
* Scratch (TIOBE #17) -- educational/block language: AI can create lesson content, but not much production automation.

Legacy / very low-benefit tier (0 - 3.0/10)
Very context-sensitive, architecture-specific or educational-only languages where generic LLM assistance is least useful.
* Assembly language (TIOBE #20) -- highly architecture- and context-specific; AI can suggest patterns but needs deep hardware context.

would you agree with the LLM's "logic"?

Comment Re:Africa Least Distorted and Centred (Score 1) 259

>The notion that because it makes africa look smaller than it is relative to other countries this makes africa less relevant makes no sense whatsoever.

It makes a LOT of psychological sense. Too many people in the world think the map shows the actual share of the earth's surface each country occupies, and land area for a country IS important. Try to tell me otherwise and I'll laugh in your face. Just the minerals below the surface are worth it, not to mention potential cropland or living space.

To portray Africa as smaller than it actually is lessens its importance in MANY people's minds.

Comment Re:Crash and burn, or rise and conquer (Score 1) 56

A lot of these smaller 3-12 person companies will develop some proprietary tech on top of (probably model-agnostic) state-of-the-art models and, rather than sell it as a product, just maintain it for their existing customers as a professional services company: something that solves a hard problem but has to be wired into each client's systems differently, with features added and tweaked per customer.

Comment Re:I'll leave for another product (Score 1) 49

> It also shows me that locally running LLM are not nearly as bad as some portray these to be.
 
Well, also, things have improved a lot since last summer, and there's been a lot of work lately on bringing the architectural strategies used in flagship models down to small and micro models. A 270M model today is about as good as a 4B model was two years ago. Back then a 4B model couldn't write a haiku or sonnet; now a 270M will at least make an attempt on par with a 6th grader, even if it isn't exactly perfect. 270M sure seems like the lower limit for getting coherent responses and basic analysis. But it's enough to do tasks like check the time, weather, and date, set timers and alarms, and know when it's out of its depth and hand off to a larger model. 270M is probably just small enough to run on a Raspberry Pi with half a gig of RAM.
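
To make that hand-off idea concrete, here's a rough sketch using llama-cpp-python. The model filename, the keyword heuristic, and the route_to_larger_model stub are all made up for illustration; the point is just a tiny local model handling trivial requests and deferring everything else.

# Hypothetical sketch: a ~270M local model answers simple requests and
# hands anything harder off to a larger model. Paths and names are placeholders.
from llama_cpp import Llama

# Assumption: you have a small quantized GGUF model on disk; at 4-bit
# quantization a 270M model needs only a couple hundred MB of RAM.
small = Llama(model_path="tiny-270m-instruct-q4.gguf", n_ctx=512, verbose=False)

SIMPLE_TOPICS = ("time", "date", "weather", "timer", "alarm")

def route_to_larger_model(prompt: str) -> str:
    # Stub: in a real setup this would call a bigger local or hosted model.
    return f"[handed off to larger model] {prompt}"

def answer(prompt: str) -> str:
    if not any(topic in prompt.lower() for topic in SIMPLE_TOPICS):
        # Out of the small model's depth: hand off instead of guessing.
        return route_to_larger_model(prompt)
    out = small(f"Q: {prompt}\nA:", max_tokens=64, stop=["Q:"])
    return out["choices"][0]["text"].strip()

print(answer("set a timer for ten minutes"))
print(answer("explain the halting problem"))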

Comment Re:I'll leave for another product (Score 1) 49

You can self-host a pretty competent LLM on an 8GB GPU and a really competent one on 16GB these days. A 30B model might only fit a couple of layers in GPU memory but can still be fast enough for daily use. In a year or two, if you have 64GB of system RAM and a 16-32GB GPU, you'll be able to run summer-2025 state-of-the-art models locally; you can already run winter-2024 state-of-the-art models on consumer hardware. The only thing a self-hosted setup doesn't have access to yet is search tools.
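
For anyone curious what that partial offload looks like in practice, here's a minimal sketch with llama-cpp-python; the filename and layer count are placeholders and assume you've already downloaded a quantized GGUF build of a roughly 30B model.

# Hypothetical sketch: run a quantized ~30B model with only some layers
# offloaded to an 8-16GB GPU while the rest stays in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="some-30b-instruct-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=20,  # offload as many layers as fit in VRAM; 0 = CPU only
    n_ctx=4096,       # bigger context windows cost more memory
    verbose=False,
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize partial GPU offload in one paragraph."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])

The n_gpu_layers knob is the whole trick: you dial it up until VRAM is nearly full and let the CPU handle whatever layers are left, trading some speed for the ability to run a model that doesn't fit on the card.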

Comment Re:Of course (Score 5, Informative) 88

Videos need to be at least N long to qualify for different tiers of ads and sponsorship. Yammering on for a minute about what the video is about, for whatever reason, helps keep people through that critical first 30-second period, which boosts your viewership and tells the algorithm your content is worth watching. If the video has over 10k views you can pretty much always skip over the first 30-65 seconds.

Comment Re:Threaten = lie (Score 1) 111

How is his complaining going to get his app listed higher?

If I were Apple, I would say "Screw you." In legalese, of course.

Dare him to sue, and countersue for damages from the frivolous lawsuit and the "big political stink" damaging Apple's reputation.

He has no ground to stand on; he is just being a whiny little bitch.
