Comment Re:Probably not (Score 1) 77
It's mostly about the terrains and houses being "boomer NFTs", rather than the cost of building new ones.
That's different from previous generations how?
Asking the question we've all answered already twice a year since Slashdot was founded. Are there alternatives?
Nope, not as long as the stupid clock changing ritual continues.
End that, and we'll be happy to never discuss it again.
The alternatives are standard time year-round, which means that in the summer the sun comes blasting through your bedroom window at 5 AM instead of giving you that light in the evening, or DST year-round, which means that in the winter it's pitch black when your kids go to school and you go to work.
None of the options are good, but DST is the least bad.
Well quite.
Many companies have trouble deploying a really basic CRUD website, which has been solved technology for a few decades now. And we're supposed to expect them to have no problem deploying the still incredibly experimental tech of AI?
Likewise, factories have enough trouble deploying basic PLCs (it doesn't help that anything except Modbus and Modbus/TCP is just awful), also a decades-old solved problem. But AI robots will magically be easy.
I think you failed to read the summary.
First of all, when researchers use Claude for security research, they basically always have the LLM not only find the bug but also validate it and even produce PoC exploit code. All of which the LLM can and does do, and far faster than humans.
Second, the Mozilla team definitely did determine that Claude was right and found its output far more useful than grep strcpy.
And when it comes to AGI, I think things are muddy anyway. I have no idea whether we will reach it, and have some doubts that we will need it.
It's definitely muddy, since we can't even define what AGI is with any precision. As for whether we'll need it... I mean, we have human intelligence, which we believe to be general, and find that pretty useful, so I don't know why AGI wouldn't be useful, especially if it's a lot smarter than we are.
It has a few nice thought experiments, but why do we need AGI when we have AI systems that do the same without being AGI?
Because they don't do the same without being AGI.
For our own safety as a species I think it's better to keep our tools sub-AGI and lacking in agency, but that doesn't seem to be where we're headed.
If we want to cater to people's "I don't like it dark in winter and I also don't like it bright in summer", then DST might indeed be a shitty solution, but less shitty than the others.
I think we're in agreement. I don't think DST is a particularly good solution, just less bad than the others.
It was amusing to think about gradually shifting the clock times, though. I even fired up Octave and did some calculations. I arbitrarily picked 39N as my latitude for optimization, and 6:30 AM as the earliest sunrise time, then calculated the daily offsets to see how the offsets would change and what the maximum sunset times would be. It turns out that DST is a pretty conservative option. My model had a maximum offset of 118 minutes, almost twice as much as the one hour for DST. The daily shifts were pretty small, though; the worst was September 26, which would be 105.6 seconds shorter than 24 hours. If the difference is smeared across the day, each second would be 1.2ms shorter.
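The smearing arithmetic is simple enough to sketch. This takes the worst-case daily shift from my model as a given (the 105.6 seconds on September 26 at 39N); the underlying sunrise calculation isn't reproduced here:

```python
# Sketch of the clock-smearing arithmetic. The 105.6-second figure is
# the worst-case daily shift from the model described above; the
# sunrise model itself is assumed, not recomputed.

SECONDS_PER_DAY = 24 * 60 * 60    # 86400 nominal seconds in a day

worst_daily_shift_s = 105.6       # seconds the clock day must shrink

# Spread evenly across the day, each "smeared" second is shorter
# than a real second by this many milliseconds:
shrink_ms = worst_daily_shift_s / SECONDS_PER_DAY * 1000.0
print(f"each second is {shrink_ms:.2f} ms shorter")  # ~1.22 ms
```

At that scale the drift is far below anything a human would notice, though of course clocks and computers would all have to agree on the smearing schedule.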
The sunrise and sunset times for this scheme were quite nice. However, it obviously becomes incredibly hard for anyone outside the time-shifting regime to schedule anything with anyone inside. And you can't time-shift globally, because the shift optimized for the northern hemisphere is exactly wrong for the southern hemisphere, so you'd have to have two shifting regimes, north and south.
So the two hemispheres would gradually shift in and out of sync, coming together at the equinoxes and reaching 118 minutes difference at the solstices. This would be particularly weird at the equator, where two towns that straddle the equator would be sliding in and out of sync with their neighbors.
Anyway, it was a fun thought exercise, but not in any way practical.
That would, of course, be the worst decision since the 2008 GFC when banksters got bailed out.
The 2008 bank bailouts actually worked out extremely well. The bailouts were in the form of loans which were repaid on time -- mostly well ahead of time -- and with significant interest. The taxpayers came out well ahead in real dollar terms, even ignoring the question of what might have happened if Bush hadn't bailed them out. And macroeconomists are pretty confident that the result of not bailing them out would have been a depression.
I'm not saying this means government should bail Altman out -- and AFAICT OpenAI is the only one of the three frontier labs that needs bailing out. Anthropic is in much better shape financially, and Google is obviously just fine. There's no reason for the government to bail out OpenAI; they should just be allowed to fail.
The argument that government should take control of AI development for safety and national security reasons has more merit. Or at least it would if we expected government to manage it competently and responsibly.
Do you have a reference to that proof? As LLMs are (like most other neural nets) general function approximators, it's unlikely they cannot be used to implement AGI, if AGI can be implemented using current computing paradigms.
My guess is that he's referring to Merrill & Sabharwal's results on circuit complexity. https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Faclanthology.org%2F2022...., https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Farxiv.org%2Fabs%2F2207.007... and https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpapers.neurips.cc%2Fpape... are the key results.
However, Merrill & Sabharwal also showed (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Farxiv.org%2Fabs%2F2310.07923) that adding chain-of-thought capabilities breaks the previous results, so the earlier papers basically only apply to pre-2023 LLMs.
There is mathematical proof that LLMs cannot do AGI.
Bullshit.
For that to be true we'd first need a formal definition of what AGI is, and we don't have that, and don't have anything close to it.
My guess is that you're probably referring to the circuit complexity results by Merrill & Sabharwal. But those results specifically address only fixed-depth, fixed-precision transformers, which means they don't apply to LLMs with chain-of-thought-augmented inference. That is, the proof doesn't apply to current LLMs, only to LLMs as they existed 3+ years ago. Also, the authors showed that even just augmenting LLMs with a scratchpad is enough to bypass the limitations.
If that's not the result you're talking about, tell me which proof you're referring to and I'll explain why it doesn't mean what you're claiming it means.
Yes, the last paragraph is a tangent. Quite a lot of people seem to deeply believe in the power of corporate BS, and seem to genuinely think that if they browbeat people into not pushing back, or simply refuse to keep talking, then they have somehow won the argument and the victim now buys the line of BS.
Workers who can't spot obvious bullshit aren't so good at not producing obvious bullshit?
Well, perhaps this is one of the cases where what feels obvious to me aligns with reality. Still worth checking, because not all results are as expected.
The approach to corporate BS never ceased to amaze me. Upper management would spew it, and when people got a chance to push back they'd spew more, either keeping it going until the person got fed up talking to a recording or just shutting down the conversation. In both cases they appeared to believe that they had actually convinced someone.
Sure, you have the flexibility to shift gradually. I do, too. But not everyone has a flexible schedule, indeed most people do not.
Some office jobs and essentially all blue-collar jobs rely on people all showing up at specific, fixed times. Retail stores and any business that services walk-in customers have to have fixed, predictable hours. The wholesalers that those businesses buy their goods from have to have fixed, predictable hours -- and those relationships often cross time zones. Factories have to start and end shifts at predictable times. Freight has to depart and arrive at predictable times, which means that all of the people involved in moving that freight have to work at specific hours. Schools have to have specific, predictable times, so both teachers and students know to show up at the same times, and so that everyone who depends on them can schedule their own lives around those times.
Trying to use gradually-shifting start times for all of those organizations would be impossible. It would actually be more practical to gradually shift the clocks, to "smear" the time changes across a couple of months. Is that what you propose?
If measurements are made via the gravitational field, you don't believe them unless somebody figures out how to replicate the measurement directly using electromagnetism. If there is a physical phenomenon in this universe that simply never interacts with the electromagnetic field, then it just will never exist for you.
Well, it doesn't interact with the weak or strong force in any way that's been measured yet, either. Neutrinos don't interact with the EM field, but they are now routinely detected.
1. You all need to get over yourselves.
2. You desperately need better safety regulations.
3. No other southern states exist.
If you don't have time to do it right, where are you going to find the time to do it over?