
Comment Re:Fuck this administration (Score 4, Insightful) 224

People have been leaving ever since the pandemic normalized remote work. Many other parts of the world are a whole heaping lot cheaper to live in than the USA and just as nice or nicer in various ways. So, people in a position to be able to remote work have high incentive to bolt, and the tech that supports remote work is here to stay (unlike Trump).

If you are looking for political motivation to leave, though, that door swings both ways. There is a conservative culture in this country that thinks liberals have attained far too much power and are ruining it, and fully expects a huge liberal pushback in the very near future. And there is a liberal culture in this country that (much like you) sees the Trump administration as the harbinger of the end times. The extremists on both sides can't abide each other's existence, and they make a lot of noise about it, so that friction is probably also driving some departures on both sides.

Comment Re:If your boss is forcing you to use AI (Score 1) 93

If they are not outright trying to replace their staff with AI, it is only because they believe AI isn't ready yet.

Nothing would please them more than being able to fire their entire staff and just command a virtual AI assistant to run their entire business for them, while they keep all that salary-money for themselves instead. This is 100% their goal.

Any issues about poverty or not having anyone who can buy their products or what-not are political matters that will be solved in political forums, so they absolutely will NOT waste a penny more than they need to on human labor.

Comment Re:If your boss is forcing you to use AI (Score 5, Insightful) 93

More realistically, they believe that using AI means "getting more work done faster." They take that as gospel truth with no qualifiers.

So, if you aren't using AI then clearly you are wasting company time and money, and hence shouldn't be promoted and maybe should be "transitioned out."

But they are making the obvious mistake of turning a metric into a goal. Employees will game the system. People will "engage with AI" to hit their numbers without using it in a useful way that saves time, especially if they are working on projects which, due to the specifics of the project, AI can't help with.

So, all this will really do is eliminate the honest and talented employees in favor of ones who can't succeed without AI (due to lack of talent and knowledge), and/or are willing to use it deceptively to advance their position.

Are those the kind of people you want working for you? For big corporations, yes, since those are the kind of people who are most similar to corporate leadership in terms of talent and ethics.

Comment Re:Autoplay video (Score 1) 64

Slashdot has made it clear that they are not content with the money that can be made from the tame, well-behaved ads that ad blockers allow through, and they must impose obnoxious ads on us that ruin our reading experience and expose us to risks of malware.

I have noticed that any time they manage to skirt AdBlock Plus, after a few days AdBlock Plus blocks their trick again. It's a cat-and-mouse game I suppose.

The simple truth is none of us need Slashdot. This site just aggregates news from other sites. We can simply browse the original sources. Or even browse Slashdot in a broken state where we can't post comments. It might deny our egos an opportunity to feel like we just put someone else in their place, but honestly, that's an experience we can live without.

Comment Re:Old Economy (Score 2) 19

Lawsuits are flying from unauthorized use of copyrighted works to train the LLMs, communities are uniting to block data center construction, audiences are fiercely rejecting AI-generated content in various forms of media, prestigious law firms are getting slapped-down by judges for using AI-generated hallucinations in their court filings, students are using it to cheat on homework, creative workers of all varieties hate it for the threat it poses to their job security, and the world is drowning in slop.

It is easy to see why people would be interested in a fund like this. And just as easy to see why people would believe that AI is doomed and the AI bubble is bound to pop catastrophically sometime soon. I previously predicted a bubble pop myself.

But there is a flip side. AI has been usefully applied in many places across many industries. When it is not unwisely applied in ways that make hallucinations harmful, it can actually do valuable work. So, AI is here to stay.

There might be a market adjustment, and it might even happen this year, but I don't think the global economic meltdown from the biggest bubble-pop ever is actually going to happen. Though over-valued, the big tech companies that are dominating the S&P 500 right now are actually delivering something useful, so even if they sink a bit, they will not crash and burn.

Well, OpenAI might, but that's mainly because it has been outclassed by its competitors and has no real business plan, as was reported this very day right here on Slashdot. Others will survive, though.

Comment Re:Mostly agreed, but... (Score 4, Informative) 53

If you are building solutions in the Microsoft Azure Cloud, it is very easy to get immediate access to GPT models to power your AI pipelines (whatever they may be). Very affordable, too.

By contrast, you cannot gain access to the Gemini models on Azure, and there is a big hill to climb to gain access to Claude. I don't know about Meta's models; I never checked.

My point being, this is a bit of a vendor lock-in that makes use of GPT models the path of least resistance for many businesses that are building AI-powered solutions. Maybe that will help OpenAI, though I think not for long.

GPT models are weaksauce compared to Gemini and Claude, which have far surpassed them. Businesses that really need the power of these other models can use Google Vertex and integrate that with their Azure cloud, or set up an Anthropic account and just beam the web requests right over. Anthropic is problematic in that it doesn't allow you to ensure that data never leaves specific global regions (which many people need for legal reasons), but Google Vertex sure does.
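For concreteness, here is a minimal sketch (in Python, with placeholder resource names, API keys, and model IDs) of what the two call paths look like at the HTTP level. The Azure route pins the model to a named deployment inside a region-scoped resource, while Anthropic's Messages API is a single global endpoint:

```python
# Placeholder names throughout: resource, deployment, api-version, and
# model ID are illustrative, not recommendations.

def azure_openai_request(resource: str, deployment: str,
                         api_key: str, prompt: str) -> dict:
    """Build a chat-completion request for an Azure OpenAI deployment.

    The model is addressed via a deployment inside your Azure resource,
    and the region is fixed by where that resource was created -- which
    is what makes data-residency guarantees straightforward on Azure."""
    return {
        "url": (f"https://{resource}.openai.azure.com/openai/deployments/"
                f"{deployment}/chat/completions?api-version=2024-02-01"),
        "headers": {"api-key": api_key, "Content-Type": "application/json"},
        "body": {"messages": [{"role": "user", "content": prompt}]},
    }

def anthropic_request(api_key: str, prompt: str) -> dict:
    """Build a request for Anthropic's Messages API.

    This goes to one global endpoint ("just beam the web requests right
    over"), so you cannot pin it to a region the way you can an Azure
    resource or a Vertex AI location."""
    return {
        "url": "https://api.anthropic.com/v1/messages",
        "headers": {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
        "body": {
            "model": "claude-sonnet-example",   # placeholder model ID
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

The structural difference is the point: the Azure URL encodes *your* resource and deployment (and therefore a region), while the Anthropic URL encodes neither.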

So, I think that advantage that OpenAI currently has will not last long.

It is sad to see an innovator lose out, but that is also how things normally go. We tell ourselves feel-good stories about how copyright law or patent law can protect the small innovator against the huge corporations, but that isn't how things play out in reality. By hook or by crook, the major players wind up leveraging what they have to get control over the shiny new thing, and that's how the cookie crumbles.

Comment Re:Deeper than food safety (Score 1) 209

A common complaint that arises when discussing pollution is "we ordinary people cannot do anything to significantly reduce pollution. It's all on the big corporations!"

Well, here is something you CAN do: eat lab-grown meat instead of regular meat. It hugely reduces the pollution for which you are financially responsible.

So here is a new opportunity for environmentalists to soul-search. How devoted are you? Are you ready to put your money where your mouth is (so to speak)?

Comment Re:Well then, (Score 1) 23

how governments around the world continue to push the narrative that they are servants of the people

When they bother to push this false narrative at all, they usually go with utilitarianism. They maintain that a lot of people benefit a lot from the presence of the data centers, and that outweighs the few people who suffer a little from increased utility costs.

They could also go with "rich people are people too, and they are obviously more important than poor people, so serving the interests of rich people IS serving the interests of the people." Though saying that sort of thing out loud tends to reduce their level of voter support, so the smart ones avoid it.

Comment Re: I wish that... (Score 2) 147

Maybe "programmers" come in different tiers, and AI can only replace the lower tiers. Like, programmers who can only implement simple generic code when given clear instructions (but cannot design solutions themselves and cannot debug very well) would be the bottom tier, and perhaps AI will be able to completely replace them, but still not able to replace higher tier programmers.

If we make some more distinctions:

Mid-tier: can independently design and implement business solutions using existing tools and frameworks.
Upper-tier: can build and optimize the frameworks, tools, operating systems, etc. that other developers use.
Master-tier: can implement and improve upon AI systems.

Hypothetically speaking, someday we might be able to produce AI that can replace lower-upper tier programmers without being capable of replacing Master-tier programmers. Then there would still be work for humans in the field, but not very much. The majority of what humans are doing now could be automated, without necessarily creating a self-evolving system that will ascend to godhood.

Comment Re:Copyright infringes my rights… (Score 2) 52

I think it is interesting that you got a troll mod, given the popularity of such notions as "sharing is caring" here on Slashdot.

It may be that we are progressing into a copyright-free world, and are just beginning to feel the growing pains that come with that adjustment.

It is popularly believed that copyright benefits the independent creator since it gives them legal protection against big corporations who would violate that copyright, but this has been repeatedly disproved, especially recently with big-and-rich corporations helping themselves to everything they see on the Internet to train their AI, and getting away with it.

But even before that, the majority of copyright licenses have been held by a small circle of rich elites, and NOT by the creators who create the works. In order to have a prayer of making money off your talents, you have to sign those rights away. But now that we have other mega-rich people who see a real path to riches through flagrant disregard of copyright law, we are set to watch a clash of titans over the issue.

The one thing that won't influence the outcome at all will be the opinions that the majority of people hold on the issue, since the majority of people are too poor to matter.

Comment Re: AI Hype needs money (Score 1) 106

I also wonder if their new definition of "best developers" is "developers who rely entirely on LLMs for coding."

With that semantic shift in place, they can hire new cheap greenies who rely entirely on LLMs because they can't code, and who do nothing but cause trouble for the actual competent developers who are manually fixing everything they break, and spin it to sound like progress.

Comment Re:AI Hype needs money (Score 4, Interesting) 106

The experiences reported in these articles are so utterly unlike the ones I have using AI to generate code. It HAS gotten better in the last year, but it is still nowhere near this capable, for me.

If I give it too many requirements at once, it completely fails and often damages the code files significantly, and I have to refresh from backup.
If I give it smaller prompts in a series, doing some testing myself between prompts, there is usually something I need to fix manually. And if I don't, and just let it build on what it built before, the code becomes increasingly impenetrable. The variable names and function names are "true" but not descriptive (usually too vague), and as those mount up the code becomes unreadable. It generates code comments, but they are worthless noise that points out the obvious without telling you anything actually useful.

When new requirements negate or alter prior ones, the AI does not refactor them into a clean solution; it just duplicates code, leaves the old no-longer-needed code behind, and makes the variable names even weirder to compensate. The performance of the code decays quickly.

And on top of all this, it STILL can't succeed at all if you need to do anything that is a little too unique to your business needs, like a fancy complex loose sort with special rules. It tries and fails, but tells you it succeeded, and you get code that doesn't work.

Sometimes it can solve surprisingly hard problems, and then get utterly stuck on something trivial. You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.

I have good success getting new projects started using AI code generation. When it is just generating mostly scaffolding and foundational feature support code that tends to be pretty generic, it saves me time. But once the aspects of the code that are truly unique to the needs start coming into focus, AI fails.

I still do most of my coding by hand because of this. I use AI when I can, but once this stuttering starts happening I drop it like a hot potato, because it causes nothing but problems from then on.

I simply don't see how the same solution could reliably make consistent and significant changes to a codebase and produce reliable, performant, or even functional code on an ongoing basis. That hasn't ever worked for me and still doesn't, even with the latest gen AI models.

Comment Re:access millions of computers and devices (Score 2) 54

When morally upright people with some technical competence discover an exploit that can be used as a backdoor, they report it to the vendor so it can be fixed. They don't report it on public media, so the vendor has time to fix it before criminals learn about it, thus protecting everyone who is already using the software. And, in turn, the morally upright and competent software vendor actually prioritizes it for a speedy fix, and does not have the reporter arrested and charged with criminal hacking.

But wealth and power tend to rob one of the "morally upright" aspect. So, when government agents discover exploits, they immediately weaponize them and keep them secret, use them for nefarious purposes under the veil of government secrecy, and lie to everyone as needed. Similarly, tech company leaders shoot the messenger in a misguided effort at mending their wounded pride.

These facts motivate people to not bother, and enable evil to thrive.

Comment Re:Congratulations (Score 1) 162

It seems the intent of my original post was not clear.

Yes, I know LibreOffice exists; it's the one I use at home. My point was that Big Tech is "all in" on AI across the board, driven by an obvious eagerness to eliminate human software developers from the creation process. They see only the money they can save.

But the consequences of actually achieving such a goal would undermine their own business models. The reason why they would no longer need programmers is the same reason why no one would need their products.

We are still nowhere near that point yet, despite the enthusiasm that they are trying to drum up with stories like this one. Their agents can create a C compiler. Well, today I asked Cursor to move several methods out of a file that had gotten too big into a separate class, making the methods public and static in the process and updating references. This entire operation involved a grand total of two code files and barely any "thinking."

It started generating PowerShell scripts to do batch operations on the files and screwed that up, wiping them out entirely. Then it tried to retrieve copies from git, which wasn't set up for this project, and started showing inner text about trying to reconstruct the files from the chat history, at which point I stopped it. I had the whole thing backed up, because I am no fool, and the Cursor interface also gave me an undo button, which worked.

So, these AI that are so capable they can create C compilers can't even move a handful of methods from one file to another without destroying the whole thing.

AI is nowhere near ready to replace us.

Comment Re:Congratulations (Score 0) 162

When will 16 AI agents be able to code me up a Word processor with features equivalent to Microsoft Word?

Because once they can do that, people can stop buying Office and just vibe up their own versions. So long as the agents can implement standard file formats, the differences in implementations won't matter.

An interesting future is being teased here; one in which the only tech giants remaining will be the makers of AI, and everyone else will just vibe up all the software they now pay through the nose to get.
