What is the value of removing all humans from the loop?
There are a lot of advantages:
There are probably other advantages.
Don't forget to subtract the electricity used during the daytime to charge the battery.
Why? They came up with 2500 MWh produced and $80/MWh, with no difference in price between electricity used directly and electricity used from the battery. There is also 1200 MWh of battery storage capacity.
Consider:
2500 MWh - 1200 MWh = 1300 MWh during the day and 1200 MWh in battery storage. If you sell that at $80/MWh, that's $104K during the day and $96K at night from the battery, which adds up to $200K.
If you store only 600 MWh in the battery during the day, that's:
2500 MWh - 600 MWh = 1900 MWh during the day, which is $152K, and then $48K during the night from the battery. That adds up to... $200K again.
If you store nothing during the day that's:
2500 MWh - 0 MWh = 2500 MWh during the day, which is $200K, and then $0 during the night from the battery. That adds up, curiously again, to $200K.
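Here's a minimal sketch (Python, purely illustrative, reusing only the figures above) that makes the invariance obvious: under a single flat price, shifting energy from daytime sales into the battery never changes the total.

```python
# Flat-price model from the figures above: 2500 MWh generated, $80/MWh,
# same price whether sold directly or later from the battery.
GENERATED_MWH = 2500
PRICE_PER_MWH = 80

for stored_mwh in (1200, 600, 0):            # the three cases worked out above
    day_revenue = (GENERATED_MWH - stored_mwh) * PRICE_PER_MWH
    night_revenue = stored_mwh * PRICE_PER_MWH
    print(f"store {stored_mwh:>4} MWh -> ${day_revenue + night_revenue:,}")  # $200,000 every time
```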
Now, I get that there are more complicated models for this, where day and night rates differ and there are some losses from storing power in a battery and using it later versus using it right away, etc. However, this was presented as a back-of-the-envelope calculation with ballpark figures, and those factors would pretty much just be a rounding error in such a thing. Otherwise, if we're just looking at a simple model of a set amount of electricity per day at a set price, there is no point in subtracting the electricity used during the day to charge the battery. Just like there is no point in subtracting inventory in a 24-hour store that you keep in a back room, restock around evening, and then sell overnight.
There is, of course, the question of overcapacity: do you end up producing more electricity than you can sell? But that's not really part of the model at this point either. The assumption is that the amount of electricity generated is close enough to the amount sold that the difference is pretty negligible.
Whatever the non-zero value is, natural gas would generate less greenhouse gas. It makes sense to displace oil first.
No actual disagreement with that as a general statement, but the GP pointed out just how low the usage actually is. Statements like the above need a diminishing-returns standard applied to them: with just about any effort, you have to recognize that the last tiny bit may require a lot more effort or difficulty to achieve. As an example, consider the goal of removing 100% of the water from alcohol. You can approach that goal with multiple rounds of distillation, but the azeotropic limit will still leave at least 4.4% water no matter how much distillation you do (and it will still be almost impossible to reach even that point through distillation alone). There are other methods to go beyond that, but they get more and more complicated.
So, for any such effort, there's a point you can label as good enough. That does not mean that you actually completely stop trying, just that if you have a list where you plan to complete one task before another, it does not make a lot of sense to pour massive amounts of effort into completing the last few percent of the first task before picking the low-hanging fruit of the second task... I can use a fuller fruit-picking analogy there, in fact: it does not make a lot of sense to let all your pears rot because your dozens of fruit pickers are still working the apple orchard, where every few hours they manage to find three or four apples that were missed the first time.
Don't they also power the same more than 266 000 homes hourly, daily, weekly, monthly and biannually?
While your point is quite correct, technically speaking they are also correct. That "annually" is not actually wrong; it's just completely pointless, meaningless, and extraneous.
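For anyone wondering why "annually" adds nothing: "homes powered" is already a rate, so the figure is the same over any time window. A rough sanity check (Python; the average household draw is an assumption of mine, not a number from the summary):

```python
# "Homes powered" is plant output divided by average household demand.
# Both are rates (power), so tacking a time period onto the claim adds nothing.
AVG_HOME_DEMAND_KW = 1.2     # assumed average continuous draw per home (~10,500 kWh/yr)

def homes_powered(plant_output_mw: float) -> int:
    """How many average homes a plant's output covers -- no 'per year' involved."""
    return int(plant_output_mw * 1000 / AVG_HOME_DEMAND_KW)

# Working backwards, ~266,000 homes corresponds to roughly a 320 MW plant,
# whether you ask hourly, daily, or "annually".
print(homes_powered(320))    # ~266,666
```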
Do generations 2 and later actually want to do this?
Doesn't matter. They have little choice. The ones that aren't suicidal WILL do the best they can. It helps that they'll be indoctrinated from birth about the heroism of the mission.
This is notable for having electrical measurements that actually make sense. I don't know how that happened.
I was shocked too. Planned to comment on that myself. Don't worry though, we still have cause to be pedantic. Notice that the summary criminally does not capitalize "Watt" when it writes "megawatts".
Would AI companies intentionally weaponize their models to maximize profit via social engineering of end users?
No! Of course not! How dare you imply that MechaHitler would ever do such a thing!?
It seems to me that, if the AI is even capable of properly recognizing when things have gone off the rails (which I find a bit dubious), there are going to be so many false positives that it's a real problem if the AI just cuts someone off like a bartender with a drunk regular. It seems to me that the better way might be to continue the discussion, but have a way to provide very definite cues that the AI has assessed the conversation as off the rails. Something a bit like the comment moderation system on Slashdot could work. So, basically, you could continue the conversation with the chatbot at -1, but it would make it very clear, in some obvious way, that it considers the conversation to be out in la-la-land territory. Sure, that sort of thing tends to fail on the seriously mentally ill. The degree of selective amnesia/ignorance they can apply in their "reasoning" is amazing. I have certainly seen plenty of people who are off the deep end point to supporting "evidence" that clearly labels itself as speculation, as debunked material presented merely for historical context, or as outright fiction. Still, it might at least head off some people who are merely headed in the direction of the deep end (just like a line of floats only warns people where the literal deep end of a pool starts, but provides no realistic barrier to those determined to go there anyway).
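As a very rough sketch of what that "conversation at -1" idea could look like (hypothetical names and thresholds, nothing any vendor actually ships): instead of refusing, the assistant keeps answering but attaches a visible score, much like Slashdot moderation.

```python
from dataclasses import dataclass

@dataclass
class ScoredReply:
    text: str
    plausibility: int   # hypothetical -1..+5 score, Slashdot-style

def present(reply: ScoredReply) -> str:
    """Keep the conversation going, but flag it loudly instead of cutting it off."""
    if reply.plausibility <= -1:
        banner = "[Scored -1: the assistant thinks this thread has left consensus reality]"
        return f"{banner}\n{reply.text}"
    return reply.text

print(present(ScoredReply("Sure, let's keep exploring that.", plausibility=-1)))
```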
The problem, though, is that the AI is generally less capable of recognizing actual reality than your average schizophrenic. Not to mention that the efforts the AI companies will make to get them to recognize when people are delusional are highly likely to produce a high false-positive rate. Most LLMs are populist in their knowledge and understanding: they agree with what the masses agree with. Consider Galileo and Giordano Bruno, who challenged the prevailing non-heliocentric view of the solar system (and broader cosmos) and held that the stars were actually other suns with planets of their own. Or Alfred Wegener, who did groundbreaking geological work founding the idea of continental drift, but who was treated like a quack in his lifetime. Or Ignaz Semmelweis, who thoroughly demonstrated empirically that washing hands and tools between dissecting cadavers and delivering babies saved lives and, for his trouble, was fired and then eventually lured to a mental institution by a "job offer" that was actually a trap to commit him there, where he was fatally beaten by orderlies on his first day (now, he may have ended up with syphilis from his medical work, and that could have affected his mental state eventually, but still). Or Nikolai Vavilov, who died in a gulag for not believing in Lysenkoism and believing in natural selection instead. Or Ludwig Boltzmann, who ended up committing suicide just before Jean Perrin's Brownian motion experiments basically validated his atomic theories. Etc. Etc. Etc.
Basically, for all of these, the prevailing knowledge of their time period and location (obviously, outside the Soviet sphere of influence, for example, real scientists knew that Lysenko was a moron and a quack, that Nikolai Vavilov was clearly intellectually superior, and that quite possibly hundreds of thousands or even millions would not have died if not for actions like imprisoning him due to Lysenko's power plays) labeled the ideas of these scientists as fringe theories, quackery, or outright insanity. There is little doubt that today's LLMs, fed on the prevailing knowledge and opinions of those times and places, would have agreed that these scientists were delusional. They would have had a very hard time distinguishing them from other people who really are delusional. Humans have a hard enough time with it. For example, try convincing people that the CIA actually has done research and development on devices for putting voices into people's heads with microwaves. That, by the way, is weird but true: the common tinfoil-hat-wearing-crazy trope about the CIA projecting voices into people's heads with microwaves became sort of real at one point, when the CIA did (or, really, paid a contractor to do) R&D on devices that use microwaves to cause vibrations in people's skulls and project audio at range. The actual idea is more of a public address system that reaches further than sound waves, can be precisely targeted, and works even in noisy environments, and it seems to be related, if not necessarily directly, to microwave crowd control devices that just cause intense burning pain. The point, though, is that your average person is so inundated with crazy conspiracy theories and ideas that telling what is real or not is a serious chore that many humans are not up to and that LLMs don't have a hope in hell of accomplishing in their current forms. Real AGI has the promise of maybe being able to keep up with all the craziness and present us with objective truth, but that requires both reaching that point technologically and the tech companies that will likely get there first actually seeing shareholder value in providing people with objective truth.
Also, I have had a lengthy conversation with an AI about how my daughter and I were planning to eat it and all other extant AIs. It was quite lengthy, and we did get the AI to concede quite a few points after initial objections. Many of those concessions seem to have come from the exact same points that are brought up in the fine article. However, all of those concessions were also on points the AI made that were not actually 100% technically correct either. My arguments were absurdist, but never actually 100% technically impossible (such as the various ways that energy can be converted into chemical bonds in food, or how even inedible substances can be converted through chemical or nuclear means into edible ones, even though the effort required would be impractically immense). The point here, of course, is not whether or not eating an AI is possible, but the fact that the AI was not really able to tell whether the crazy ideas I was presenting were just a whimsical thought exercise (one my daughter and I had a lot of fun with) or whether I really was, in fact, delusional. Basically, it's a reverse Turing test of sorts (a notion I also proposed to the AI near the end of that conversation). So, the basic way of saying all of this is that, in my opinion, current AIs are not really capable of doing this kind of detection without a huge number of false positives, leading them to cut off or derail a lot of conversations that could be quite valid (or, more likely, just people playing around).
This title is absolute nonsense, obviously. No real astrophysicist would propose anything involving actual matter traveling _at_ the speed of light. It's impossible under the existing definitions of "travel" and "speed of light". Maybe if they were proposing new definitions of "travel" or "speed of light" or new physics. Otherwise no. Accelerating physical matter to the speed of light would take infinite energy. Not in a "so it's a solved problem if we can just get our hands on an infinite quantity of energy" way, but in a "whoops, we inputted some bad parameters and the program crashed" kind of way.
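The "infinite energy" point falls straight out of the relativistic kinetic energy formula KE = (gamma - 1) * m * c^2. A quick numerical look (Python, one illustrative kilogram of matter) shows the blow-up as v approaches c, and at v = c the math literally stops working, which is the "program crashed" flavor of impossible:

```python
import math

C = 299_792_458.0          # speed of light, m/s
MASS_KG = 1.0              # one kilogram of "physical matter" (illustrative)

def kinetic_energy(v: float) -> float:
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)   # raises ZeroDivisionError at v == C
    return (gamma - 1.0) * MASS_KG * C ** 2

for fraction in (0.9, 0.99, 0.999999):
    print(f"v = {fraction}c -> {kinetic_energy(fraction * C):.3e} J")
# kinetic_energy(C) has no finite answer -- the denominator goes to zero.
```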
If something like this is used as a front-line emergency response system, it may not even be necessary to make multiple trips and refills. The idea seems to be to detect potential wildfires while they are still small spot fires and put them out before they can become raging wildfires. So, an automated firefighting drone deployed from a potentially unmanned station may make a lot of sense for fast response.
If the fire grows, the rest of the firefighting can be done the more traditional way. Or the drones could still contribute by changing how they collect more water: for example, set up a system where the drones set down their empty tank and pick up a full one at a station where empties are refilled from a nearby water source.
Fire spreads quickly. If the drones are always on standby (a difficult proposition with humans, since it means someone always waiting in the cockpit) and can take off immediately with a full load, that can mean the difference between a massive wildfire and a small spot fire that is quickly extinguished. Your back-of-the-napkin math is based on total volume of water delivered, but wildfires at the very start are better modeled as a form of constrained exponential growth. Your model might be suitable for a large forest fire already raging out of control. For a forest fire just starting out, quick response is going to be the best option. If it's not stopped by a quick response, then your model of dumping as much water as possible may be the way to go, but it seems like a hybrid approach would be even better.
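To make the response-time point concrete, here's a toy model (Python, every parameter invented purely for illustration) of unchecked early growth; it shows why minutes matter more than tank size at the spot-fire stage:

```python
import math

INITIAL_AREA_M2 = 10.0     # assumed spot-fire size at detection (made up)
GROWTH_PER_MIN = 0.08      # assumed early exponential growth rate (made up)

def fire_area(minutes_to_first_drop: float) -> float:
    """Area the fire reaches before any suppression arrives, in square metres."""
    return INITIAL_AREA_M2 * math.exp(GROWTH_PER_MIN * minutes_to_first_drop)

for minutes in (3, 10, 30, 60):    # standby drone vs. crewed helicopter scramble times
    print(f"{minutes:>2} min to first drop -> ~{fire_area(minutes):,.0f} m^2 burning")
```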
Also, I should add that conventional helicopters are probably not the best fit for this approach either. I am thinking that a hybrid drone approach would be best: for example, high-speed drone motherships that drop hover-capable drones for highly targeted fire suppression. I would also favor electric over conventional. Traditional helicopters take a lot of maintenance and care to stay flight-ready, whereas electric drones kept out of the weather in dedicated automated hangars are likely to have a much longer shelf life and can potentially sit for years in remote locations with no human presence (and then fly themselves to a central maintenance location when they do need maintenance).
When trimming seconds off response time can be mission-critical, automated seems to be the way to go. Reiterating, of course, that it does not need to be a fully automated paradigm, just an automated front line.
The price one pays for pursuing any profession, or calling, is an intimate knowledge of its ugly side. -- James Baldwin