Submission Summary: 0 pending, 10 declined, 4 accepted (14 total, 28.57% accepted)

Submission + - Is "learning to code" good for everyone? (bbc.co.uk)

Chrisq writes: The BBC has an article, "What can we do to get more women into coding?", in which a journalist decides to "find out how easy it would be for a woman in her 30s to learn to code in Python."

Now, some of the issues that she describes are things I would never have considered a problem, for example:

The adult class was challenging — you had to really want to learn to code in order to stay engaged.
If you make mistakes in your code, it just doesn't do anything. But when it works, there's not much pay-off — just some lines on a screen.

and

I also found the step change from learning Scratch [a children's programming language] to Python similarly jarring in the children's toys — you suddenly go from colourful blocks to an empty screen with no handholding.
So, what could help bridge this gap from fun games for kids, to more professional level complex coding?

For most programmers the payoff is seeing what is happening: your program might just print one number, for example the nth factorial or prime. I don't see this as a man/woman thing, just a difference in ways of looking at things. I work with a female programmer who is quite happy to work on back-end systems that only produce a line of JSON as output, or even only change database entries.
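To illustrate the kind of "unrewarding" program described above, here is a minimal sketch (the function name and trial-division approach are my own choices, not from the article): the entire visible payoff of the run is a single number on the screen.

```python
def nth_prime(n):
    """Return the nth prime number (1-indexed), by simple trial division."""
    count = 0
    candidate = 1
    while count < n:
        candidate += 1
        # candidate is prime if no d in [2, sqrt(candidate)] divides it
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(10))  # prints 29 — and that is the whole "reward"
```

For someone who enjoys the logic, getting that one correct number is the payoff; for someone expecting visual feedback, it can feel like nothing happened.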

Is the idea that a single course can teach any woman (or man) to be a productive programmer wrong? Or am I looking only at a certain type of programmer, and could people who value rich visual feedback over getting processes working still pursue a career as developers?

Submission + - The UK's new aircraft carrier's systems run Windows XP - vulnerable to cyber attack (telegraph.co.uk)

Chrisq writes: Fears have been raised that Britain's largest ever warship could be vulnerable to cyber attacks after it emerged that it appears to be running the outdated Microsoft Windows XP.

A defence source told The Telegraph that some of the on-board hardware and software "would have been good in 2004" when the carrier was designed, "but now seems rather antiquated".

Submission + - It's not just cars that have defeat devices! (theguardian.com)

Chrisq writes: An EU study has found that many electronic devices and appliances use more energy in real-world conditions than in the standard EU tests. Often the real-world figures are double those in the ratings.

Sometimes this is achieved by having various optional features switched off during the test: switching on modern TV features such as "ultra-high definition" and "high-dynamic range" in real-world test cycles boosted energy use in four out of seven televisions surveyed – one by more than 100%.

However, some appliances appear to have "defeat devices" built in, with some Samsung TVs appearing to recognise the standard testing clip:

“The Swedish Energy Agency’s Testlab has come across televisions that clearly recognise the standard film (IEC) used for testing,” says the letter, which the Guardian has seen. “These displays immediately lower their energy use by adjusting the brightness of the display when the standard film is being run. This is a way of avoiding the market surveillance authorities and should be addressed by the commission.”

Submission + - How do you assess the status of an open source project?

Chrisq writes: Our software landscape includes a number of open source components, and we currently assume that these components will follow the same life-cycle as commercial products: they will have a beta or test phase, a supported phase, and finally reach end of life. In fact, a clear statement that support has ended is unusual — the statement by Apache that Struts 1 has reached end of life is almost unique. What we usually find is:
  • Projects that are obviously inactive, having had no updates for years
  • Projects that are obviously not going to be used in any new deployments because the standard language, library, or platform now has the capability built in
  • Projects that are rapidly losing developers to some more-trendy alternative project
  • Projects whose status is unclear, with some releases and statements in the forums that they are "definitely alive", but which seem to have lost direction or momentum
  • Projects that have had no updates but are highly stable and do what is necessary, yet are risky because they may not interoperate with future upgrades to other components

By treating open source in the same way as commercial software, we only start registering risks when there is an official announcement. We have no metric we can use to accurately gauge the state of an open source component, but there are a number of components that we have a "bad feeling" about.
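For lack of a standard metric, one crude proxy is to measure commit recency and recent contributor count on a local clone of the project. The sketch below is only an assumption of what such a check might look like — the thresholds (365 idle days, 3 active authors) and the function names are arbitrary choices, not an established standard.

```python
import subprocess
import time

def repo_signals(repo_path):
    """Return (days_since_last_commit, distinct_authors_in_last_year)
    for a locally cloned git repository at repo_path."""
    # Unix timestamp of the most recent commit
    last_ts = int(subprocess.check_output(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
        text=True).strip())
    days_idle = (time.time() - last_ts) / 86400
    # Author emails of all commits in the last year
    authors = subprocess.check_output(
        ["git", "-C", repo_path, "log", "--since=1.year", "--format=%ae"],
        text=True).splitlines()
    return days_idle, len(set(authors))

def verdict(days_idle, active_authors):
    """Map the raw signals to a rough status label (thresholds are arbitrary)."""
    if days_idle > 365:
        return "inactive"
    if active_authors < 3:
        return "at risk"
    return "active"
```

Signals like these miss the "stable but finished" case in the list above — a project can be idle because it is complete — so they are at best an input to human judgment, not a replacement for it.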

Are there any standard ways of assessing the status of an open source project? Do you use the same stages for Open source as commercial components? How do you incorporate these in a software landscape to indicate at-risk components and dependencies?
