
Submission + - SPAM: Nuclear Should Be Considered Part of Clean Energy Standard, White House Says

An anonymous reader writes: More details have emerged about the climate and energy priorities of President Joe Biden’s infrastructure plan, and they include support for nuclear power and carbon capture and sequestration (CCS). In a press conference with reporters yesterday, White House climate adviser Gina McCarthy said the administration would seek to implement a clean energy standard (CES) that would encourage utilities to use greener power sources. She added that both nuclear and CCS would be included in the administration’s desired portfolio. The clean energy standard adds a climate dimension to the Biden administration’s recently announced infrastructure plan, seeking to put the US on a path to eliminating carbon pollution.

“We think a CES is appropriate and advisable, and we think the industry itself sees it as one of the most flexible and most effective tools,” McCarthy told reporters. “The CES is going to be fairly robust and it is going to be inclusive.” McCarthy did not provide details about how far a CES would go in supporting nuclear power. It’s possible that the policy may only cover plants that are currently operating, but it may also extend to new plants. The former is more likely than the latter, though, given the challenges and costs involved in building new nuclear capacity.


Comment Re:Multiverse (Score 1) 209

So if there are an infinite number of universes that were made during the big bang, then they would still be getting made, right? Otherwise it would be finite.

That's not how infinities work. There are uncountably many real numbers between 0 and 1, but that doesn't mean new real numbers are constantly being produced. An infinite number of universes existing right now is a very different thing from a finite number existing right now with more being created without end. Moreover, an infinite number of universes (or numbers, or anything) does not in any way imply (or dis-imply) some kind of unbounded process of creation. You can even have an infinite number of universes with a finite rate of destruction and still always have an infinite number in existence. Infinity is a state, not a process.
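
To put the "state, not a process" point in standard set-theoretic terms (a quick sketch using nothing beyond introductory cardinality; the notation is mine, not the GP's):

  Let U be the set of all universes, with |U| = \aleph_0 (countably infinite).
  Removing any finite set D of "destroyed" universes changes nothing:
      |U \setminus D| = \aleph_0
  Even removing a countably infinite set can leave the cardinality intact,
  e.g. deleting the even numbers from the naturals:
      |\mathbb{N} \setminus 2\mathbb{N}| = \aleph_0
  Cardinality describes the set as it stands; it neither requires nor implies
  any ongoing process that generates new elements.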

Of course, getting back to the subject at hand, whether physics supports the idea of infinite universes existing is a different matter altogether.

Comment Re:What? It's not nearly that bad. (Score 4, Interesting) 277

That is not correct. Capital and economic value are being destroyed constantly, and therefore money is being lost unless we generate new economic value at least at the replacement rate.

People are burning otherwise potentially productive days/weeks/months of their all-too-finite lives doing nothing productive. Other, less fortunate people are outright dying earlier than they otherwise would have for various reasons (not just the virus itself, but also otherwise-treatable medical issues that overwhelmed healthcare systems cannot currently address).

Productive capacity is being lost. An empty seat on an airplane, or a flight not flown at all, is economic value that is lost forever. The same is largely true for services not rendered. Other things, like a lost season of college basketball, represent value irrevocably lost as well.

Manufacturing facilities that normally run at full output capacity lose that production for the duration they're shut down--this is especially true in industries where technological obsolescence is rapid. Each day the California Tesla plant is down represents a number of cars that will now never be made.
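
As a rough back-of-the-envelope illustration (the figures below are my assumptions for illustration, not numbers from the post: a nominal capacity around 400,000 vehicles/year and an average selling price around $50,000):

  # Back-of-the-envelope estimate of permanently lost production during a
  # plant shutdown. All figures are illustrative assumptions.
  annual_capacity = 400_000    # vehicles/year (assumed nominal capacity)
  avg_price = 50_000           # USD per vehicle (assumed average selling price)
  shutdown_days = 30           # hypothetical shutdown length

  cars_per_day = annual_capacity / 365
  lost_revenue = cars_per_day * shutdown_days * avg_price
  print(f"~{cars_per_day:.0f} cars/day; ~${lost_revenue / 1e9:.1f}B of output "
        f"gone over {shutdown_days} days")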

All of the above represents money being destroyed or consumed without the typical value-creating activities generating replacement (or better-than-replacement) value. These examples are not just theoretical thought exercises--actual hard currency is effectively destroyed in all of these processes. Central banks can paper over the problem by issuing more currency, but the more they do so without a commensurate increase in real value in the economy, the more they devalue all of the currency in circulation.
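
One standard way to formalize that last point (my framing, not the GP's) is the quantity-theory identity:

  M V = P Q

  M: money supply        V: velocity of money
  P: price level         Q: real output

With V roughly constant, if M rises (new currency issuance) while Q falls (destroyed productive capacity), P must rise--each unit of currency buys less real value.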

None of the above is meant to discount the need to take dramatic steps to deal with the pandemic, and I'm not trying to argue that the steps being taken are not worth it (or that they are necessarily sufficient). As you note, letting the virus run rampant results in an entirely different sequence of destruction. Really, a global pandemic is the economic equivalent of a hot nuclear war and the subsequent nuclear winter. The choice for society and policymakers is which tradeoffs to make while dealing with the situation.

Comment Hardware Compatibility (Score 1) 966

On Windows, I can buy whatever mouse/keyboard/monitor/CPU/GPU/sound card/USB headphones/USB drive/etc. that I want, plug it into my PC, and it will work correctly 90% of the time. In fact, most of the time I won't even have to think about it. The other 10% of the time, I'll need to go to Windows Update (or, in very rare cases, the manufacturer's website) to get the driver... which will install itself completely automatically.

On Linux, I have to carefully research which hardware works and which doesn't; which config files I need to edit by hand; and -- if I'm feeling adventurous -- which kernel flags I need to unset to get it all to work. If I'm very lucky, my new hardware will work in some capacity; it will almost never be 100% functional, but maybe I can get it to the point where it's good enough.

This is a massive problem for everyone, but especially for gamers, who absolutely must have their GPU, monitor, audio, and network working correctly and at peak performance. It's absolutely not a problem for ML developers, network admins, etc., who operate on clusters of 1,000 machines and couldn't care less about peripherals. Guess what, though... there are way more regular users out there than ML developers and network admins.

Comment Re: MapReduce is great (Score 1) 150

Removable media (e.g. tape and WORM optical disk) libraries were typical for petabyte+ storage arrays back in the late 90s. I remember the Subaru telescope facility in Hawaii had a petabyte storage system that was primarily an automated tape library (plus a large section of wall occupied by a physically massive ~40 GB RAM array) when I very briefly interned* there in the late 90s.

That was large, but not uniquely or ridiculously large. My WAG is that, globally, there were probably on the order of 1k installations of similar or greater magnitude at the time. Certainly some of the DOD projects at LLNL would easily have been at the scale your parent poster claims.

*I.e., assisted with workstation builds & Linux installs.

Comment Re:This is ridiculous... (Score 1) 337

What you describe is perhaps the way it should be, but it's not precisely true either. The reality is that the DOJ has an absolutely incredible degree of leeway and discretion in setting its enforcement agenda and priorities.

This is nothing nefarious, just the inevitable outcome of the confluence of the following factors:

  • The sheer number of laws on the books
  • The overwhelming number of potential law enforcement actions possible as a result of the above
  • The limited resources of the DOJ
  • The reasonable expectation that the DOJ exercise judgment in enforcement actions (e.g. murder cases take priority over shoplifting cases, which might not be pursued at all) rather than using a totally unbiased but mindless approach like a strict queue ordered by filing date

Note that the DOJ having discretion is not the same as a single, all-seeing mastermind at the DOJ directing traffic on all possible investigations and enforcement actions. In many cases it is the emergent outcome of policies developed to translate law into rules, discretion exercised by bureaucrats at various points in the process, etc. that ultimately decides the path of a case.

It might also be the result of purely political policy decisions by members of the executive branch or others with the ability to influence enforcement policy. Many people are only now beginning to understand the extent and magnitude of this factor as they see the stark contrast between the Obama administration's policies and those of the Trump administration. Policy decisions by the executive branch can determine whether or not you get deported, whether or not states condoning recreational marijuana sales get busted by the FBI/DEA/ATF, and how aggressively (if at all) the DOJ pursues certain types of potential violations (e.g. ADA & IDEA violations). Those decisions can be implemented through very direct and transparent means like executive orders, or more indirectly through methods like withholding the funding or authorizations required to staff an enforcement division.

"What the DOJ cares about is irrelevant" might be the way it should work in theory, but not how it works in practice. There was nothing inevitable about the DOJ handling the case the way it did (indeed, nothing inevitable about them handling the case at all--if not outright de-prioritizing the case, many of these issues are resolved differently based on the highly variable inclination and ability of DOJ personnel to convince parties to go through ADR/mediation rather than issuing an order or proceeding through litigation). To be clear, I'm not necessarily saying the DOJ mishandled the issue or made misjudgments, I'm just pointing out that they did in fact exercise judgment (it's just the way these things work in practice).

Comment Re:Why? (Score 2) 238

It seems like you're assuming that the sphere is pumped full of air when the water is drained out, but that isn't necessary. In fact, doing so would significantly and needlessly complicate the design as it is scaled to greater depths, while simultaneously compromising its power-generation potential by reducing the pressure differential between the interior and exterior of the sphere.

What you're looking at is more like the implosion of a ~14,000 m^3 vacuum chamber, which might not even be obvious from the surface when the sphere is placed at greater depths.
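
For a sense of scale, here's an idealized estimate of the energy such a sphere stores (a sketch under stated assumptions: seawater density ~1025 kg/m^3, a 700 m deployment depth, pressure taken as constant over the sphere's height, and pump/turbine losses ignored):

  # Idealized energy capacity of a ~14,000 m^3 submerged storage sphere:
  # E ~= P * V, with P the hydrostatic pressure at the chosen depth.
  rho = 1025        # seawater density, kg/m^3
  g = 9.81          # gravitational acceleration, m/s^2
  depth = 700       # assumed deployment depth, m
  volume = 14_000   # sphere volume from the comment above, m^3

  pressure = rho * g * depth    # Pa (~7.0 MPa, i.e. roughly 70 atm)
  energy = pressure * volume    # J
  print(f"~{energy / 3.6e9:.0f} MWh ideal capacity at {depth} m")

That works out to roughly 27 MWh before losses, which also gives a feel for the violence of an uncontrolled implosion at depth.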

Biotech

Fraud Detected In Science Research That Suggested GMO Crops Were Harmful (nature.com) 357

An anonymous reader writes: Three science papers that suggested genetically modified crops were harmful to animals, and that have been used by activist groups to argue for banning them, have been found to contain manipulated and possibly falsified data. Nature reports: "Papers that describe harmful effects to animals fed genetically modified (GM) crops are under scrutiny for alleged data manipulation. The leaked findings of an ongoing investigation at the University of Naples in Italy suggest that images in the papers may have been intentionally altered. The leader of the lab that carried out the work there says that there is no substance to this claim. The papers' findings run counter to those of numerous safety tests carried out by food and drug agencies around the world, which indicate that there are no dangers associated with eating GM food. But the work has been widely cited on anti-GM websites — and results of the experiments that the papers describe were referenced in an Italian Senate hearing last July on whether the country should allow cultivation of safety-approved GM crops. 'The case is very important also because these papers have been used politically in the debate on GM crops,' says Italian senator Elena Cattaneo, a neuroscientist at the University of Milan whose concerns about the work triggered the investigation."

Comment Re:Why? (Score 1) 55

No, there really isn't any excuse for using raw inline SQL given the existence and ubiquity of parameterized query APIs. They provide all of the flexibility of raw SQL but with guaranteed proper escaping of value text and thus no SQL injection vulnerability (bugs in the API implementation notwithstanding).
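
A minimal sketch of the difference using Python's stdlib sqlite3 (the same pattern exists in essentially every database API; the table and input here are made up for illustration):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT)")

  user_input = "Robert'); DROP TABLE users;--"  # classic hostile input

  # Vulnerable: splicing user input directly into the SQL text.
  #   conn.executescript(f"INSERT INTO users (name) VALUES ('{user_input}')")

  # Safe: a parameterized query. The driver passes the value out-of-band,
  # so it is treated purely as data and is never parsed as SQL.
  conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
  conn.commit()
  print(conn.execute("SELECT name FROM users").fetchall())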

Biotech

Majority of EU Nations Seek Opt-Out From Growing GM Crops 330

schwit1 writes: Nineteen EU member states have requested opt-outs for all or part of their territory from cultivation of a Monsanto genetically-modified crop, which is authorized to be grown in the European Union, the European Commission said on Sunday. Under a law signed in March, individual countries can seek exclusion from any approval request for genetically modified cultivation across the 28-nation EU. The law was introduced to end years of stalemate as genetically modified crops divide opinion in Europe. The requests are for opt-outs from the approval of Monsanto's GM maize MON 810, the only crop commercially cultivated in the European Union, or for pending applications, of which there are eight so far, the Commission said.

Biotech

Scotland To Ban GM Crops 361

An anonymous reader writes: Scotland's rural affairs minister has announced the country will ban the growing of genetically modified crops. He said, "I am concerned that allowing GM crops to be grown in Scotland would damage our clean and green brand, thereby gambling with the future of our £14 billion food and drink sector." Many Scottish farmers disapprove of the ban, pointing out that competing farms in nearby England face no such restriction. "The hope was to have open discussion and allow science to show the pros and cons for all of us to understand either the potential benefits or potential downsides. What we have now is that our competitors will get any benefits and we have to try and compete. It is rather naïve."

Comment Re:It's a Great Learning Experience (Score 1) 226

I suspect there's a bit of a definition issue at play here (with all fault apparently being on my end, given some of the other comments in this discussion). In my mind, DevOps roles are such only if the "Dev" and "Ops" parts are connected--i.e., you manage operations for the software that you've developed. I agree that there are rapidly diminishing or negative returns otherwise. E.g., if you write some Nodejs web services on Monday and troubleshoot MS Exchange/ActiveDirectory integration issues on Tuesday, there isn't much benefit. In that case, however, I'd argue that you don't have a DevOps role, you just have two different, unrelated roles (which, as I stated, is apparently a definition issue on my part).

The only part that I would argue with is...

The difference is between developers knowing the operations side and being the operations side.

You cannot, in my opinion, "know" the operations side if you have never actually been the operations side. The real question is whether knowing the operations side is worth the effort of being the operations side (at least for a while). In my experience, the answer is unequivocally "yes" (but again, with the caveat that you are the operations side only for the software that you develop, and not for, e.g., rolling out the latest Windows service pack to all users at your location).

I should also clarify that my experience has only been with internal development. The demographic differences with respect to external-facing applications (i.e., user/developer ratios on the order of possibly millions to 1 vs. 10s or 100s to 1), among other things, would necessarily limit the ability of developers to participate in operations.

As you've noted, having to run operations to the exclusion of all development activity would bore you to tears; it would bore me too. That prospect is what forced me to consider--to a degree and precision that would never have occurred to me previously--how the design and architecture of a proposed solution impact deployment and operations. Because I did not want to spend all my time supporting the system I mentioned in my previous post, I designed it such that it required all of about 30 minutes every other month to administer, and was easy as hell to troubleshoot in production. This meant a much more complex design, and more difficulty in implementation, but it saved me so much time on net that I could still spend the vast majority of my time doing more interesting stuff.

If deploying and administering the software that you've developed becomes your full-time occupation to the exclusion of all other activity, then either:

  1. You do not actually understand deployment and administration in the relevant environment(s), and are therefore horribly inefficient at it (and would benefit greatly from learning).
  2. Your design made it very difficult/time-consuming to deploy and/or administer. This is almost an inevitable outcome if the above is true, but can also occur if the developer has a "not my job/problem" attitude when it comes to deployment and administration, or can be a straight-up deliberate trade-off based on available resources.
  3. Both of the above. Or...
  4. You are working at a scale or in a domain for which deployment and administration is an inherently difficult problem independent of solution design (though paradoxically in this case it is usually even more important for the developer to understand Ops, because while there may be little they can do to make the hard problem easier, there are lots of ways they can inadvertently make the hard problem impossible).

Comment It's a Great Learning Experience (Score 4, Interesting) 226

I essentially have this kind of role within my organization. I design, develop, deploy, and support small to mid-tier systems (e.g., the planning system for a $XXXmio/yr global department, with 300+ direct users) while being one of my own customers, as I am actually a business planner by role as opposed to a developer. I develop systems as a way to do my "day job" much more effectively. A typical tech stack would be Excel UIs, a PostgreSQL data store, and whatever else I need in the middle (e.g., nodejs, tomcat, redis, whatever).
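
For flavor, here's a minimal sketch of what the "middle" can look like in a stack like that (entirely hypothetical names and schema, not my actual system): a small JSON endpoint fronting the PostgreSQL store, which an Excel workbook can hit via Power Query or VBA.

  # Hypothetical minimal middle tier: Excel pulls JSON from this endpoint
  # instead of talking to PostgreSQL directly. Table/column names invented.
  from flask import Flask, jsonify
  import psycopg2

  app = Flask(__name__)

  @app.route("/plan/<int:year>")
  def plan(year):
      with psycopg2.connect("dbname=planning") as conn:
          with conn.cursor() as cur:
              cur.execute(
                  "SELECT dept, amount FROM plan_lines WHERE plan_year = %s",
                  (year,),
              )
              rows = cur.fetchall()
      return jsonify([{"dept": d, "amount": float(a)} for d, a in rows])

  if __name__ == "__main__":
      app.run(port=8080)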

What I've found is that, in general, doing the right thing the "right way" is not worth the cost compared to doing the right thing the "wrong way". By definition, in either scenario, the right thing is getting done. What most pure developers utterly fail to understand is that in trying to do the former, there is an overwhelming tendency to end up doing the wrong thing the right way instead.

This is because, as Fred Brooks pointed out long ago--and as the "lean startup" movement is re-discovering today--for any non-trivial novel problem you cannot know in advance what the "right thing" is until you've actually tried to implement a solution. Brooks framed this as the need to throw away the first try; the lean startup movement is essentially defined by a corollary: you have to figure out how to try cheaply enough that you can afford to throw away the first attempt and try again (and again, and again if necessary), progressively elaborating a robust definition of the "right thing" by using those iterations as experiments to test hypotheses about what it actually is. Doing things the "right way" usually costs so much in time, if not capital, that you simply can't afford to throw away the first try and start over, or you can't complete enough iterations to learn enough about the problem.

Now, I'm not saying that you should be totally ignorant of software engineering best practices, design patterns, etc. What I am saying is that there is a limit to how effective you can be in reality if you live purely within the development silo. Having a "DevOps" role (granted, self-imposed in my case) has been one of the best things that's ever happened to me as far as making me a better developer, right up there with the standard oldies like writing your own recursive descent parser and compiler.

In short, it is commonly-accepted wisdom among programmers (for good reason!) that you are more effective if you actually understand the technology stack down to the bare metal or as close to it as you can manage (even if only in abstract-but-helpfully-illustrative examples like Knuth's MMIX VM), and that this understanding can only be gained via practice. It should be obvious that the same is true in the other conceptual direction through deployment and end use.
