
Comment Re:Will this make glowing watched cheaper? (Score 1) 36

Was tritium production really a concern in the first place? From what I've seen, there's plenty of tritium produced in heavy-water fission reactors like CANDU: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

Several articles I found via Google noted that there's currently only about 20 kg of tritium in the world. Much more will apparently be needed for fusion reactors, and on a continuing basis.

Comment Re:But not in the US (Score 4, Insightful) 200

In fact, it is UNETHICAL to use a placebo control in any clinical trial of an investigational product for which the existing standard of care already includes a product on the market.

In plain English, it is entirely unethical to give participants a placebo to test the efficacy of a new flu vaccine when we already have existing vaccines on the market. Doing so denies participants in the study access to effective treatment. And if you have to test against a placebo, it will be impossible to recruit participants, because nobody will take the chance of receiving a placebo when they could just go to the pharmacy and get vaccinated.

There are only two possible explanations for such a position: either gross ignorance of basic scientific and ethical principles for conducting medical research in humans, or deliberate malicious intent to stop all research on investigational drugs. It doesn't actually matter which one it is. Both are entirely unacceptable.

The fact that a huge segment of the American population does not understand even the most basic scientific principles is the reason why many people will die needlessly.

Comment Privately held; indie devs (Score 2) 43

nah, this will soon come crashing down as the enshittification of commercial games continues.

Valve itself is NOT publicly traded. There are no shareholders to whom the value needs to be shifted.
This explains (in part) why Valve has been a little bit less shitty than most other companies.
It also means Valve's own products (Steam, Steam Deck, the upcoming Deckard, etc.) are slightly less likely to be enshittified
(e.g., whereas most corporations try to shove AI into every one of their products, the only news you'll see regarding Valve and AI is Valve making it mandatory to label games that use AI-generated assets).

If I really want a game, I wait until the price seems reasonable and affordable, even if that means waiting for years. The side benefits are that there's more content, most of the bugs are squashed, and the drama is history. It seems unethical to me to support classist corporations in any fashion, especially financially.

Also indie games are a thing.
Indie-centric platforms like itch.io are a thing.
Unlike Sony and Microsoft, Valve isn't selling the Steam Deck at a loss, so they care less about where you buy your games from -- hence the support for non-Steam software (the onboarding even includes fetching a browser flatpak from Flathub).

Humble bundles are also a thing (with donation to charities in addition to lower prices).

So there are ways beyond "buy a rushed-to-market 'quadruple A' game designed by committee at some faceless megacorp".

Comment Heuristic (Score 1) 48

It's expected. At their core, all chess engines search a min-max tree, but instead of doing an exhaustive breadth- or depth-first search, they use heuristics to prioritize some branches of the tree (as in A-star).

On the modest hardware of the older machine, there isn't much you can explore before the player gets bored waiting.
So obviously you're going to use much more stringent rules: "never take a branch where you lose a piece" prunes entire swaths of the tree, whereas "see if sacrificing piece XXX gives us a better path" would require exploring much more of it.
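
To make that concrete, here is a minimal negamax sketch assuming the python-chess library; the piece values, the depth cutoff, and the one-ply "don't hang a piece" check are illustrative choices of mine, not anything the Atari program actually did:

    import chess  # python-chess library, assumed available

    # Illustrative piece values; the king is left out of the material count.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9}

    def material(board):
        """Material balance from the side-to-move's point of view."""
        score = 0
        for piece_type, value in PIECE_VALUES.items():
            score += value * len(board.pieces(piece_type, board.turn))
            score -= value * len(board.pieces(piece_type, not board.turn))
        return score

    def hangs_material(board, before):
        """One-ply check: can the opponent capture us into a material deficit?"""
        for reply in board.legal_moves:
            if not board.is_capture(reply):
                continue
            board.push(reply)
            down = material(board) < before  # back to our point of view
            board.pop()
            if down:
                return True
        return False

    def search(board, depth):
        """Negamax with the crude 'never lose a piece' pruning rule."""
        if depth == 0 or board.is_game_over():
            return material(board)
        before, best = material(board), -10_000
        for move in board.legal_moves:
            board.push(move)
            if hangs_material(board, before):  # prune the whole branch
                board.pop()
                continue
            best = max(best, -search(board, depth - 1))
            board.pop()
        return best if best > -10_000 else material(board)  # all pruned: static eval

The pruning rule throws away every sacrifice, which is exactly the trade-off described above: huge savings in tree exploration, at the cost of never finding a good sacrifice.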

Having been trained on everything the corporation could scrape from the internet, an LLM has presumably ingested a lot of real-world chess games (e.g., game reports from online archives of chess magazines). So as long as the tokeniser can parse the notation found there (or at least tell the moves apart; it doesn't need to "translate" the notation into English), the model has a large corpus of "lists of moves leading to a win", which includes real moves (sacrifices, as you mention).

And given a large enough model to encode all that, it would be in the same ballpark as the hidden Markov models of yore -- provided it kept track of the current state of the game (which it currently does not).

Comment Devil's in the detail. (Score 1) 48

I wonder if you wouldn't win if you just told ChatGPT to write a chess AI and then used that chess AI to beat the Atari. Writing code is something text models are good for. Playing chess is not.

The devil is in the detail.
All chess engines are essentially A-star: they search a min-max tree, but use heuristics to prioritize some branches instead of going breadth- or depth-first.
Generating a template of a standard chess algorithm would probably be easy for a chatbot (these are prominently featured in tons of "introduction to machine learning" courses that the LLM's training could have ingested). Writing the heuristic function that guides the A-star search is more of an art form, and that's probably where the chatbot is going to derail.

Funnily enough, though, I would expect that if you used the chatbot AS the heuristic, it wouldn't be a terribly bad player.
Have some classic chess software that keeps track of the board and lists all the possible legal moves, then prompt the chatbot with something like:
"This is the current chessboard: {state}, these are the last few moves: {history}, pick the most promising among the following: {list of legal moves}".

In fact, that's how some people applied hidden Markov models to chess decades ago.

Similarly, I would expect that during training the LLM was exposed to a large share of all the games available online, and so it has some vague idea of what a "winning move" looks like in a given context.

Not so much simulating moves ahead as leveraging "experience" to know what's best next in a given context, exactly like the "chess engine + HMM" combo did in the past, only a lot less efficiently.

Comment Context window (Score 1) 48

I've had ChatGPT forget the current state of things with other stuff too. I asked it to do some web code, and it kept forgetting what state the files were in. I hear that some, like Claude with access to a repo, are better, but with ChatGPT, even if you give it the current file as an attachment, it often just ignores it and carries on blindly.

Yup, they currently have very limited context windows.

And it's also a case of "wrong tool for the job". Keeping track of very large code bases is well within the reach of much simpler software (e.g., the thing that powers your IDE's "autosuggest" function, which is fed from a database of all the function/variable/etc. names in the entire codebase).
For code, you would want such an exhaustive tool to produce the list of possible suggestions, and the language model only to pick from that pre-filtered list rather than free-styling it.
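
Something like the toy sketch below, where symbol_table stands in for the IDE's identifier database and score for any language-model likelihood; both are hypothetical placeholders:

    def complete(prefix, symbol_table, score):
        """Pre-filter with the exhaustive tool, then let the model only rank."""
        candidates = [s for s in symbol_table if s.startswith(prefix)]
        return max(candidates, key=score) if candidates else prefix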

For chess, you would need special "chess-mode" training that teaches the model to dump the current contents of the board, plus the most recent moves, into its scratchpad between turns, so that the current state doesn't fall out of the context. Better still would be to do it the way people did with HMMs a long time ago: have an actual, simple chess program keep track of the board and generate the list of all legal next moves, and use the model only to predict from that pre-filtered list (*).

(*): That should be doable today with a piece of software that automatically generates a prompt like: "The current state of the board is: {board description}. The last few moves were: {history}. Choose the best move from: {list of legal moves}".
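
A hypothetical glue loop for that footnote could look like this (python-chess assumed; ask_model stands in for whatever chat API you use, and the fallback is just a guard for when the model free-styles an illegal move):

    import chess  # python-chess, assumed

    def play(ask_model, max_plies=200):
        """The chess library, not the model, owns the board state."""
        board, history = chess.Board(), []
        while not board.is_game_over() and len(history) < max_plies:
            legal = {board.san(m): m for m in board.legal_moves}  # SAN -> move
            prompt = (f"The current state of the board is:\n{board}\n"
                      f"The last few moves were: {' '.join(history[-10:])}\n"
                      f"Choose the best move from: {', '.join(legal)}")
            choice = ask_model(prompt).strip()
            move = legal.get(choice, next(iter(legal.values())))
            history.append(board.san(move))
            board.push(move)
        return board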

Comment Already done with Markov chains (Score 1) 48

I know it scanned and consumed, like... all of the great chess games ever played. It can only predict the next word, or move.

...and this was already demonstrated eons ago using hidden Markov models.
(I can't manage to find the website with the exact example I had in mind, but it's by the same guy who had fun feeding both Alice in Wonderland and the Bible into a Markov model and using it to generate funny walls of text.)

That seems like the nature of LLMs. If I can ever coax ChatGPT into playing a whole chess game... I will let you know the results.

The only limitation of both the old models (like HMMs) and the current chatbots is that they don't have a concept of the state of the chess board.

Back in that example, the dev used a simple chess program to keep track of the moves and the board and to generate the list of possible next moves, then used the HMM on that pre-filtered list to predict the best one.
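
For illustration, a toy version of that pipeline: a plain first-order Markov chain over SAN moves rather than a true HMM, with python-chess supplying the legal-move filter (the training-data format is my assumption):

    from collections import Counter, defaultdict
    import chess  # python-chess, assumed

    def train(games):
        """games: lists of SAN move strings. Count continuations per previous move."""
        table = defaultdict(Counter)
        for game in games:
            for prev, nxt in zip(game, game[1:]):
                table[prev][nxt] += 1
        return table

    def predict(table, board, prev):
        """Pick the most frequent continuation that is actually legal here."""
        legal = {board.san(m) for m in board.legal_moves}
        ranked = [mv for mv, _ in table[prev].most_common() if mv in legal]
        return ranked[0] if ranked else next(iter(legal))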

Nowadays, you would need the chatbot to at least have a "chess mode" where it dumps the state of the board into its scratchpad, along with a list of the most recent moves by each player, so that it always has the entire game in context.

Otherwise they both do roughly the same thing (predict the next move from a model that has been fed the whole history of chess games ever played), just with insane levels of added inefficiency in the chatbot's case.

Comment Check the title: Norway (Score 1) 224

Tire particulate.

Check the /. item's title: it's Norway we're talking about.
I.e., a rich European country.

So, a country with a not-too-shabby public transport network.
Thus, compared to, say, the USA, it's already making considerable efforts to shift traffic away from personal vehicles and toward much more efficient transportation systems, which cause far fewer problems per passenger than private cars.

Public transport is the best solution for reducing travel-related pollution, but it can't cover 100% of cases.
EVs are a "good enough" solution for reducing the problems caused by people who *MUST* drive and *CANNOT avoid* it.

Submission + - UK Scientists Achieve First Commercial Tritium Production (interestingengineering.com)

fahrbot-bot writes: Interesting Engineering is reporting that Astral Systems, a UK-based private commercial fusion company, in collaboration with the University of Bristol, has claimed to have become the first firm to successfully breed tritium, a vital fusion fuel, using its own operational fusion reactor.

The milestone came during a 55-hour Deuterium-Deuterium (DD) fusion irradiation campaign conducted in March. Scientists from Astral Systems and the University of Bristol produced and detected tritium in real-time from an experimental lithium breeder blanket within Astral’s multi-state fusion reactors.

“There’s a global race to find new ways to develop more tritium than what exists in today’s world [currently about 20kg] – a huge barrier is bringing fusion energy to reality,” said Talmon Firestone, CEO and co-founder of Astral Systems.

Astral Systems’ approach uses its Multi-State Fusion (MSF) technology. The company states this will commercialize fusion power with better performance, higher efficiency, and lower costs than traditional reactors.

A core innovation is lattice confinement fusion (LCF), a concept first discovered by NASA in 2020. This allows Astral’s reactor to achieve solid-state fuel densities 400 million times higher than those in plasma.

The company’s reactors are designed to induce two distinct fusion reactions simultaneously from a single power input, with fusion occurring in both plasma and a solid-state lattice.

The reactor core also features an electron-screened environment. This design reduces the energy needed to overcome the Coulomb barrier between particles, which lowers required fusion temperatures by several million degrees and allows for higher performance in a compact size.

Submission + - Wells Fargo scandal pushed customers toward fintech says UC Davis study (nerds.xyz)

BrianFagioli writes: A new academic study has found that the 2016 Wells Fargo scandal pushed many consumers toward fintech lenders instead of traditional banks. The research, published in the Journal of Financial Economics, suggests that it was a lack of trust rather than interest rates or fees that drove this behavioral shift. For someone like me, who spent over a decade working at an online bank, the results are both fascinating and familiar.

Conducted by Keer Yang, an assistant professor at the UC Davis Graduate School of Management, the study looked closely at what happened after the Wells Fargo fraud erupted into national headlines. Bank employees were caught creating millions of unauthorized accounts to meet unrealistic sales goals. The company faced $3 billion in penalties and a massive public backlash.

Yang analyzed Google Trends data, Gallup polls, media coverage, and financial transaction datasets to draw a clear conclusion. In geographic areas with a strong Wells Fargo presence, consumers became measurably more likely to take out mortgages through fintech lenders. This change occurred even though loan costs were nearly identical between traditional banks and digital lenders.

In other words, it was not about money. It was about trust.

That simple fact hits hard. When big institutions lose public confidence, people do not just complain. They start moving their money elsewhere. According to the study, fintech mortgage use increased from just 2 percent of the market in 2010 to 8 percent in 2016. In regions more heavily exposed to the Wells Fargo brand, fintech adoption rose an additional 4 percent compared to areas with less exposure.

Yang writes, "Therefore it is trust, not the interest rate, that affects the borrower's probability of choosing a fintech lender."

This is not just an interesting financial tidbit. It is a real example of how misconduct from a large corporation can help drive the adoption of new technology. And in this case, that technology was already waiting in the wings. Digital lending platforms offered a smoother experience, often with fewer gatekeepers.

Notably, while customers may have been more willing to switch mortgage providers, they were less likely to move their deposits. Yang attributes that to FDIC insurance, which gives consumers a sense of security regardless of the bank's reputation.

This study also gives weight to something many of us already suspected. People are not necessarily drawn to fintech because it is cheaper. They are drawn to it because they feel burned by the traditional system and want a fresh start with something that seems more modern and less manipulative.

The lesson is clear. Trust is not just a soft concept. It is a measurable force that shapes where people put their money and how they interact with financial technology.

With the fintech space now more crowded than ever, this research is a reminder that reputation matters. So does transparency. As consumers grow more educated and more cynical, the winners will be the platforms that make trust a top priority.

The Wells Fargo mess may have helped kickstart a digital migration. But if those new platforms repeat the same mistakes, users will move again.

No, folks, this is not just about mortgages. It is about every service that asks for your private data or financial info. And yes, that includes AI tools, cloud storage providers, and social networks too.
