Comment Re: The party of small government (Score 2) 108

It's easy to regulate AI at the state level.

"Any job offer for a job based in California must adhere to the following AI disclosure".

"Any mortgage offered on a Californian property must satisfy the following AI disclosure."

etc.

AI regulation need not be about regulating AI innovation; it's enough merely to make sure it's applied fairly. And almost all real-world applications are indeed local.

Comment Re:No work agreement with MS? How could he? (Score 3, Informative) 37

Does MS not have such agreements in place?

I used to work at Microsoft. My employment contract specifically called out a load of personal pre-existing projects, plus ongoing and future ones, and stipulated that MS would have no ownership nor claim. I did ask for these callouts, but they were happy to go along.

Comment If you had 200 interns (Score 1) 56

I'm a software developer. Part of AI is like if I had 200 interns working for me -- some of them smarter than me and already more knowledgeable about some areas, some of them not, none of them familiar with my team's codebase. There are real cases where I could get those 200 interns to do real useful work and would want to! e.g. if I create a very detailed playbook of how to make certain code improvements, ones that wouldn't be worth my time to do myself one-by-one, but if I had 200 interns and an automated way to verify that they did a good job, then sure!

The article says "manage a team of AI agents". Managing in this sense isn't like managing a human; it's like writing a shell-script to manage some bulk process.
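A minimal sketch of that "shell-script management" idea. Everything here is hypothetical: `run_agent` and `verify` stand in for a real agent API and an automated test harness, which the comment assumes exist.

```python
# Hypothetical sketch: "managing" 200 agents as a bulk process, not as people.
# run_agent and verify are stand-ins for a real agent API and test harness.

def run_agent(task: str) -> str:
    """Stand-in for dispatching one playbook task to one AI agent."""
    return f"patch for {task}"

def verify(result: str) -> bool:
    """Stand-in for an automated check (tests, lint, review rules)."""
    return result.startswith("patch")

def manage_agents(playbook: list[str]) -> list[str]:
    """Run every task in the playbook and keep only verified results."""
    accepted = []
    for task in playbook:
        result = run_agent(task)
        if verify(result):  # automated verification replaces human review
            accepted.append(result)
    return accepted

tasks = [f"rename deprecated API in module {i}" for i in range(200)]
print(len(manage_agents(tasks)))
```

The point is the shape of the loop: the "manager" is a dispatcher plus a verifier, and the verifier is what makes 200 interns worth having.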

Comment Re:Practicality of 8k for most uses? (Score 1) 136

Is there a practical home-use for an 8k monitor/TV?

I think there is for sports. Watch soccer on a 4k TV. The camera is usually pulled back far enough to see a lot of the field, so each individual player on a 4k screen (3840x2160) is about 150 pixels tall, and the number of their jersey is about 30 pixels tall. That's usually not enough for me to make out what's happening. I can make it out better live in person. An 8k screen I think would be enough to make it out. I'd sit closer to it than your 8' if I wanted to watch. (Likewise, at IMAX I like to sit about 5 rows from the front so the screen fills my peripheral vision).
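The pixel estimate above can be checked with some rough arithmetic. The camera framing number here (how many metres of pitch fill the frame vertically) is my own assumption, chosen to match a typical wide shot:

```python
# Rough sketch of the jersey-number arithmetic; frame_m is an assumed
# camera framing, not a measured value.

def pixels(height_m: float, frame_height_m: float, screen_px: int) -> float:
    """On-screen height in pixels of an object of given real height."""
    return height_m / frame_height_m * screen_px

frame_m = 26      # assumed vertical extent of pitch visible in a wide shot
player_m = 1.8    # typical player height
number_m = 0.35   # typical jersey-number height

for screen_px in (2160, 4320):  # 4k vs 8k vertical resolution
    print(screen_px,
          round(pixels(player_m, frame_m, screen_px)),   # player height
          round(pixels(number_m, frame_m, screen_px)))   # number height
```

Under those assumptions a 4k screen gives roughly 150 px per player and 29 px per number, and 8k doubles both, which is where the "8k is enough" intuition comes from.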

Comment Re:A question for AI crazy management. (Score 5, Interesting) 121

On a deeper level, we DO have a name for what LLMs do to generate code: Cargo Cult Programming.

I'm a senior developer and use LLM assistance multiple times an hour. >90% of the time I find something valuable in what it produced (even though I rarely accept what it suggested directly, and I often write or rewrite every single line).

So what value does it have if I'm not accepting its code overall? Lots of value....
1. As a professional I produce code that (1) I can reason about why it's correct in all possible environments, (2) I'm confident that the way I've expressed it is the best it can be expressed in this situation. The LLM can spit out several different ways of expressing it, helping me assess the landscape of possible expressions, allowing me to refine my evaluation of what's best. (It doesn't yet help at all with reasoning about correctness).
2. About 10% of the time I accept some of the lines it suggested because they save some inescapable boilerplate. Or it spits out enough boilerplate to tell me hey, I need to invent an abstraction to avoid boilerplate here. I'd have gotten there myself too, just slower.
3. Sometimes I find myself learning new idioms or library functions from its suggestions.

I think management is right to be AI crazy. LLMs have increased the rate at which I solve business needs with high quality code, and I think my experience generalizes to other people who are willing to take it on and "hold it right". (Of course, there'll be vastly more people who use it to write low quality code faster, and it'll be up to management to separate the good from the bad just like it always has been.)

Comment Re:As a software engineer I hate to agree (Score 1) 163

when I ask my LLM overlord to accomplish the same task it gets really close but has a bug or two.

The way I use it: I write a method stub with just signature and docstring, or a class stub, then ask the LLM to flesh it out.
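A made-up example of that stub-first workflow: write the contract yourself, then hand it to the LLM and judge what comes back. The function and its body are invented for illustration.

```python
# Example of the stub-first workflow: I write the signature and docstring
# (the contract), and ask the LLM to propose the body.

def dedupe_preserving_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, keeping first occurrences.

    Must be O(n) and must not mutate the input.
    """
    # -- everything below is what I'd ask the LLM to propose --
    seen: set[str] = set()
    return [x for x in items if not (x in seen or seen.add(x))]

print(dedupe_preserving_order(["a", "b", "a", "c", "b"]))  # ['a', 'b', 'c']
```

The docstring carries the real work: it states the invariants the LLM's proposal has to satisfy, which is also exactly what I check it against.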

Do I ever use what the LLM produced? -- never; I always discard every single line it produced, and supply my own.

Do I ever benefit from what the LLM produced? -- usually yes, about 90% of the time. It shows me fragments or idioms or API usages that maybe I didn't know about, or it shows me a cumbersome approach which tells me where to focus my efforts on finding something more elegant. Often I'll copy/paste some expressions or lines from it into my solution. About half the time I follow up with further prompts, e.g. "The code you wrote <...>; please rewrite it to <...>."

When I'm writing code, I'm kind of proud of my professional skill, and for every single line I produce I ask myself (1) can I prove that this line is correct? (2) am I confident that this line/method/class/file/subsystem is the optimal way to achieve what should be done? Having the LLM spit out its (non-optimal) solutions helps me assess the design landscape; it's a cheap way to see other ways to achieve what should be done, and hence improves my judgment on whether mine is optimal.

Comment Re:OS X is a mess (Score 3, Informative) 138

> System Settings -> Displays -> Larger Text vs. More Space. It changes scaling, not the resolution.

What you describe changes the RESOLUTION (at least it does on my mac connected to my Dell P3222QE). Indeed when you hover over one of the icons in "larger text vs more space", a little hover text shows the resolution that it's going to pick.

And if you click "Displays > Advanced > show resolutions as list" then it replaces those "Larger text vs more space" options with a dropdown of available resolutions.

Comment Re:OS X is a mess (Score 3, Informative) 138

> What on Earth are you on about? MacOS has better scaling than any other OS out there.

I don't think so? I'm 50yo so my eyesight is getting worse. I need large text to be able to read it. My monitor resolution is 3840x2160.

When I used to use Windows, to get everything in large text (menus, dialogs, prompts, ...) I'd stay in 3840x2160 resolution and choose "large fonts 150% or 200%". The UI adjusts: I get the large fonts I need, but staying at native resolution, so all the lines+curves in the fonts are clear and crisp, and photos have all their detail.

On Mac to get everything in large text, the only option I have is to bump down the resolution, currently 3008x1692 but I'll probably have to go to 2560x1440. This gives me the large fonts I need. But it's at a lower resolution, so the fonts look pixelated, and pictures can't be displayed in as much detail.

Did I understand you wrong? Is there some other way to get MacOS to have nice scaling? I haven't found it, but I'd dearly love to.

Comment Re:Oh, well good (Score 1) 348

Now the paragons of morality who daily lecture me on the evils of ... sanity ... are literally fundraising for a murderer because they like his politics.

It's sloppy thinking to talk vaguely about "they", to lump different groups together, to take aspects of one group and project them onto another. The best defense against this sloppy thinking is intellectual self-honesty and precision.

Who exactly are the "they" you're talking about? The intersection of (1) "paragons of morality" which you probably intended sarcastically, (2) people who daily lecture me on the evils of sanity, (3) people who are fundraising for Luigi although I assume you meant "donate", (4) people who like Luigi's politics.

I suspect there are probably 0 people in the intersection of all those.

Comment Re:Make them valuable (Score 1) 30

I personally am a fan of peated Scotch whiskies, so if we were to invest heavily in opening new distilleries manufacturing more of those delectable libations, it would go a long way to ensuring the long term health of the world's peatlands. My liver is willing to make the necessary sacrifices.

What? Peated whisky is made by digging up the 1000-year-old peat. There's no capitalism/ownership solution in existence that protects investments with that long a timeline.

Comment Re:Why? (Score 1) 170

Why is there such a push to get Rust into the kernel? Memory safety? It's only memory safe up to the borders of the module, at which point you will need to make an unsafe call.

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fferd.ca%2Fcomplexity-has...

Complexity has to live somewhere. If you're a skilled programmer, you find ways to keep the complexity in small well-defined parts of your codebase that don't leak out, are carefully documented, and are particularly well covered by tests. If you're not, then complexity spreads its way into all areas of your codebase.

Memory unsafety is very similar. And Rust has proved particularly good at keeping people honest about those "small well-defined parts" with respect to memory unsafety -- both keeping oneself honest, and also keeping all the other maintainers honest who aren't as familiar with the safety contracts of this particular area as you are.
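A language-agnostic sketch of that "small well-defined parts" idea (Rust enforces the same split mechanically with `unsafe` blocks; the parsing example below is invented):

```python
# The dangerous, invariant-dependent logic is confined to one small,
# documented function; everything else only sees the safe wrapper.

def _raw_parse(field: str) -> int:
    """UNSAFE CORE: assumes field is non-empty ASCII digits.

    Callers must not invoke this directly; every invariant it relies on
    is checked by the public wrapper below.
    """
    value = 0
    for ch in field:
        value = value * 10 + (ord(ch) - ord("0"))
    return value

def parse_port(field: str) -> int:
    """Safe public API: validates all invariants _raw_parse relies on."""
    if not field or not field.isascii() or not field.isdigit():
        raise ValueError(f"not a port number: {field!r}")
    port = _raw_parse(field)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("8080"))  # 8080
```

In Python this discipline is by convention only; Rust's contribution is that the compiler forces every `_raw_parse`-style region to be explicitly marked, so other maintainers can't call it without noticing.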

(On the other hand, I totally agree with you about the hassle of maintenance languages)

Comment Re:what's the problem? (Score 4, Interesting) 170

Objecting to "enabling Rust-written device drivers to access the kernel's DMA API"? It sure seems innocuous to me. There's an API, Rust could have access to it, why is that a problem? This particular Hellwig guy, who apparently doesn't know Rust, would obviously not be asked to maintain any of the Rust code so why should he care?

The problem I heard last time around is that if someone wants to update the API (say) then they also have to update all callsites. Previously they could achieve this solely in C. Now they have to learn Rust as well to figure out how to update the callsites.

Comment Re:Well there's your problem (Score 2) 114

Stringing together concepts is reasoning. Concepts have semantic meaning by definition. To the LLM, the words are just tokens, which are statistically associated with other tokens. LLMs do not have any concept of the semantics behind those tokens, which we as thinking humans recognize as words.

That's a misleading way of describing LLMs. A better description is that each word in an LLM is a complex set of associations (12,288 of them in GPT-3). Those aren't associations to other words, though: in the first layers of the LLM they're associations to things like syntactic part of speech and referent; in the middle layers, to more complex things like "military bases" or "from Friday 7pm until"; in the higher layers, to things like "the original NBC daytime version, archived" or "time shifting viewing added 57 percent". https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Farstechnica.com%2Fscienc...

How do we define semantics/meaning? We talk of either denotative or connotative meaning. Connotation "refers to the associations that are connected to a certain word or the emotional suggestions related to that word." http://www.broward.k12.fl.us/b...

LLMs operate upon the connotative meanings, according to the textbook definition. They have an odd and arbitrary way of processing that connotative meaning, but it's still what they're doing.
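A toy illustration of "meaning as associations": represent each word as a vector of association strengths, and similarity of connotation falls out of the geometry. The words, dimensions, and numbers here are invented; real LLM embeddings do the same thing at far higher dimension (12,288 for GPT-3).

```python
# Toy word vectors: each dimension is an association strength.
import math

embeddings = {          # dims: [royalty, military, food]
    "king":    [0.9, 0.3, 0.0],
    "general": [0.2, 0.9, 0.0],
    "baker":   [0.0, 0.0, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 = same associations, 0.0 = none shared."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "king" shares more connotative associations with "general" than "baker"
print(cosine(embeddings["king"], embeddings["general"]) >
      cosine(embeddings["king"], embeddings["baker"]))  # True
```

Nothing in the vectors "knows" what royalty is, yet the connotative relationships between the words are still captured, which is the textbook-connotation point above.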
