Read the headline wrong
I thought they were putting robot-controlled humans in the fighting ring.
Indian homes are usually insulated poorly or not at all, and are typically very exposed to the elements in key areas, with subpar windows and doors. This is why homes heat up again quickly after switching off the AC. Leakage is real. That's not just true for India but for so many countries in Europe, Asia, Africa, and the Americas.
Insulate like you live in Scotland, then no problems
MMT, heterodox economics, and Post-Keynesianism are where the serious analysis is happening and where the smart money is looking for predictive power.
It's easy to regulate AI at the state level.
"Any job offer for a job based in California must adhere to the following AI disclosure".
"Any mortgage offered in a Californian property must satisfy the following AI disclosure"
etc.
AI regulation need not be about regulating AI innovation; it's enough merely to make sure it's applied fairly. And almost all real-world applications are indeed local.
Does MS not have such agreements in place?
I used to work at Microsoft. My employment contract specifically called out a load of personal pre-existing projects, plus ongoing and future ones, and stipulated that MS would have no ownership or claim. I did have to ask for these callouts, but they were happy to go along with it.
I agree with most of your post, but in America there is practically zero leftism being taught in schools.
If you think leftism is being taught in schools, then you either are making unfounded assumptions about what is being taught, or you do not know what leftism is.
Perhaps it would be more productive to engage in conversations and activities that strive to correct failures than to blame the person for their education, over which they had no control.
I'm a software developer. Part of AI is like having 200 interns working for me -- some of them smarter than me and already more knowledgeable about some areas, some of them not, none of them familiar with my team's codebase. There are real cases where I could get those 200 interns to do real, useful work and would want to! For example: if I write a very detailed playbook for making certain code improvements -- ones that wouldn't be worth my time to do one-by-one myself -- and I have an automated way to verify that they did a good job, then sure!
The article says "manage a team of AI agents". Managing in this sense isn't like managing a human; it's like writing a shell-script to manage some bulk process.
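To make that concrete, here's the kind of thing I mean -- a minimal, hypothetical Python sketch, not a real agent API (run_agent and passes_checks are stand-ins for whatever agent framework and verification harness you actually have):

# Hypothetical sketch: "managing" agents the way a shell script manages a bulk job.
from concurrent.futures import ThreadPoolExecutor

PLAYBOOK = "Replace deprecated logging calls with the structured logger; keep behavior identical."

def run_agent(prompt: str) -> str:
    """Stand-in: send the playbook plus one task to an agent, get back a proposed patch."""
    return f"# proposed patch for: {prompt.splitlines()[-1]}"

def passes_checks(patch: str) -> bool:
    """Stand-in: apply the patch in a sandbox, run tests and linters, return pass/fail."""
    return patch.startswith("# proposed patch")

def process(task: str):
    patch = run_agent(f"{PLAYBOOK}\nTask: {task}")
    return task, (patch if passes_checks(patch) else None)

tasks = ["src/a.py", "src/b.py", "src/c.py"]  # lots of small, independently verifiable changes
with ThreadPoolExecutor(max_workers=8) as pool:
    for task, patch in pool.map(process, tasks):
        print(f"{task}: {'queued for review' if patch else 'rejected -- needs a human'}")

The point is the shape: fan the same playbook out over many small tasks, verify each result automatically, and only spend human attention on the failures and the final review.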
Is there a practical home-use for an 8k monitor/TV?
I think there is for sports. Watch soccer on a 4k TV. The camera is usually pulled back far enough to see a lot of the field, so each individual player on a 4k screen (3840x2160) is about 150 pixels tall, and the number of their jersey is about 30 pixels tall. That's usually not enough for me to make out what's happening. I can make it out better live in person. An 8k screen I think would be enough to make it out. I'd sit closer to it than your 8' if I wanted to watch. (Likewise, at IMAX I like to sit about 5 rows from the front so the screen fills my peripheral vision).
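Rough numbers behind that estimate, in Python (the framing figures are my assumptions: the wide shot spans about 25 m of the pitch vertically, a player is about 1.8 m tall, a jersey number about 0.35 m):

# Back-of-the-envelope only; the frame span and object sizes are assumed, not measured.
frame_height_m = 25.0            # assumed vertical extent of pitch visible in a wide shot
player_m, number_m = 1.8, 0.35   # assumed player height and jersey-number height

for name, rows in [("4K", 2160), ("8K", 4320)]:
    px_per_m = rows / frame_height_m
    print(f"{name}: player ~{player_m * px_per_m:.0f} px, number ~{number_m * px_per_m:.0f} px")

That gives roughly 156 px for the player and 30 px for the number at 4K, and about double at 8K.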
On a deeper level, we DO have a name for what LLMs do to generate code: Cargo Cult Programming.
I'm a senior developer and use LLM assistance multiple times an hour. >90% of the time I find something valuable in what it produces (even though I rarely accept what it suggests directly, and I often write or rewrite every single line).
So what value does it have if I'm not accepting its code overall? Lots of value....
1. As a professional I produce code that (1) I can reason about why it's correct in all possible environments, (2) I'm confident that the way I've expressed it is the best it can be expressed in this situation. The LLM can spit out several different ways of expressing it, helping me assess the landscape of possible expressions, allowing me to refine my evaluation of what's best. (It doesn't yet help at all with reasoning about correctness).
2. About 10% of the time I accept some of the lines it suggested because they save some inescapable boilerplate. Or it spits out enough boilerplate to tell me hey, I need to invent an abstraction to avoid boilerplate here. I'd have gotten there myself too, just slower.
3. Sometimes I find myself learning new idioms or library functions from its suggestions.
I think management is right to be AI crazy. LLMs have increased the rate at which I solve business needs with high quality code, and I think my experience generalizes to other people who are willing to take it on and "hold it right". (Of course, there'll be vastly more people who use it to write low quality code faster, and it'll be up to management to separate the good from the bad just like it always has been.)
When I ask my LLM overlord to accomplish the same task, it gets really close but has a bug or two.
The way I use it: I write a method stub with just signature and docstring, or a class stub, then ask the LLM to flesh it out.
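For example, a made-up stub like this (just the signature and docstring; the body is deliberately left empty for the LLM):

def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping (start, end) intervals and return them sorted by start.

    Touching intervals, e.g. (1, 3) and (3, 5), count as overlapping.
    """
    ...  # the LLM fleshes this out; I then judge, rewrite, or discard its attempt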
Do I ever use what the LLM produced? -- never; I always discard every single line it produced, and supply my own.
Do I ever benefit from what the LLM produced? -- usually yes, about 90% of the time. It shows me fragments or idioms or API usages that maybe I didn't know about, or it shows me a cumbersome approach which tells me where to focus my efforts on finding something more elegant. Often I'll copy/paste some expressions or lines from it into my solution. About half the time I follow up with further prompts, e.g. "The code you wrote does [X]; please rewrite it to [Y]."
When I'm writing code, I'm kind of proud of my professional skill, and for every single line I produce I ask myself (1) can I prove that this line is correct? (2) am I confident that this line/method/class/file/subsystem is the optimal way to achieve what should be done? Having the LLM spit out its (non-optimal) solutions helps me assess the design landscape; it's a cheap way to see other ways to achieve what should be done, and hence improves my judgment on whether mine is optimal.
That sort of scaling is not available in Settings. macOS seems to only turn that on for 4K.
I have a Dell P3222QE with a native resolution of 3840x2160, i.e. 4K. But macOS still doesn't turn on proper scaling for it.
> System Settings -> Displays -> Larger Text vs. More Space. It changes scaling, not the resolution.
What you describe changes the RESOLUTION (at least it does on my Mac connected to my Dell P3222QE). Indeed, when you hover over one of the icons in "Larger Text vs. More Space", a little hover text shows the resolution that it's going to pick.
And if you click "Displays > Advanced > show resolutions as list" then it replaces those "Larger text vs more space" options with a dropdown of available resolutions.
> What on Earth are you on about? MacOS has better scaling than any other OS out there.
I don't think so? I'm 50yo so my eyesight is getting worse. I need large text to be able to read it. My monitor resolution is 3840x2160.
When I used to use Windows, to get everything in large text (menus, dialogs, prompts, and so on) I could just bump up the text-scaling percentage, and everything stayed at the monitor's native resolution.
On Mac to get everything in large text, the only option I have is to bump down the resolution, currently 3008x1692 but I'll probably have to go to 2560x1440. This gives me the large fonts I need. But it's at a lower resolution, so the fonts look pixelated, and pictures can't be displayed in as much detail.
Did I understand you wrong? Is there some other way to get MacOS to have nice scaling? I haven't found it, but I'd dearly love to.
You can't manage an AI. That doesn't make sense. It's like managing a hamster or a dolphin or a horse. The only thing you ever manage is the *humans* who wrangle the AI/hamster/horse.
10 to the 6th power Bicycles = 2 megacycles