User Journal

Journal Journal: Ask Slashdot: Incentives to prevent software problems

After the recent CrowdStrike debacle, I got to thinking about software failures and culpability and such, and was wondering if there was a way to incentivize better software practices.

I'm thinking mostly of big security breaches, where someone breaks into a system and downloads the personal information of millions of users, or ransomware attacks where an entire hospital system is taken offline, but also straight-up software failures, such as a Y2K-style bug that takes an airline offline for days. That sort of thing.

Let's imagine a US federal government agency whose job it is to incentivize best practices in software.

Yeah, I know... another federal agency. Efficiency and incompetence and all that. But read on.

The agency publishes a document outlining a set of software "best practices" that are known to help prevent security breaches. I'm thinking of something on the scale of the ISO 9000 series for manufacturing quality. Call it the ISO-Software standard.

The point of the standard is this: it's optional. Being ISO-Software compliant is just a stamp granted by the agency that tells customers the software has been vetted and is known to abide by the best practices. It's like the USDA stamp on beef: it lets you know the product has been inspected and that it is safe, wholesome, and correctly labeled. And like the USDA stamp, it doesn't tell you that you won't get sick, only that the product came about through best practices.

The agency gives out the stamp but doesn't actually do the compliance audit. That's left up to professionals and companies in the industry that meet the agency's standards, something on the order of the FAA's DER (Designated Engineering Representative) program. The FAA doesn't have the technical expertise to tell whether the electronics of an altimeter comply with the standards, so the company making the device hires one or more DERs to sign off on the technical details.

Here's how the stamp works (a rough code sketch of the decision logic follows the list):

1) You get accredited by the agency after a compliance audit. You can then advertise your product as ISO-Software compliant.

2) If a data breach happens and your software is at fault, you are protected from liability. You followed all the correct procedures, and either something was missed or a new breach technique was invented.

2a) If something was missed, the agency can look at the compliance audit and take appropriate action. DERs with a lot of breaches under their belt can be dropped from the program.
2b) If it's a new type of breach, the agency can analyze the incident the way the NTSB analyzes a plane crash, then update the standards to account for the new technique.

3) If you release an update to your software, you're still considered compliant. Even if you make major changes and/or rewrite the code, you still retain the stamp.

4) If you have a breach in an updated version of the software:
4a) If the breach exists in the original software, you're protected. You retain the stamp and have no liability for the breach.
4b) If the breach arose because of changes in your software, you immediately lose the stamp *and* can be held liable for the breach. If you want to be compliant, you have to go through the entire certification process again.

5) The software package can use compliant and non-compliant libraries.
5a) If the breach came from a compliant library, then it's the library vendor's problem.
5b) If the breach came from a non-compliant library, then it's your fault. You should have tested that library more thoroughly.
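Restated as code for clarity, here's a minimal sketch of the rules above. This is my own summary with invented names, not anything from a real agency:

    # Minimal sketch of the liability rules above; all names invented.
    def breach_outcome(certified, cause):
        # cause: 'original', 'update', 'compliant_lib', or 'noncompliant_lib'
        if not certified:
            return {"stamp": False, "liable": True}   # never certified: ordinary liability
        if cause in ("original", "compliant_lib"):
            # Rules 2/4a: best practices were followed, so you're protected;
            # rule 5a: a compliant library's breach is the library's problem.
            return {"stamp": True, "liable": False}
        if cause == "update":
            # Rule 4b: lose the stamp *and* be held liable; recertify from scratch.
            return {"stamp": False, "liable": True}
        # Rule 5b: a non-compliant library is your fault; you should have
        # tested it (the rule doesn't say whether the stamp survives).
        return {"stamp": True, "liable": True}

    # Example: a breach traced to the vendor's own update.
    print(breach_outcome(True, "update"))   # {'stamp': False, 'liable': True}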

As mentioned, this type of compliance stamp is completely optional. It won't interfere with open-source projects or labor-of-love packages on GitHub, and it shouldn't affect anything licensed with a warranty disclaimer.

But the stamp would be a strong selling point for commercial software, and that would incentivize companies to do compliance audits and implement best practices. It would give CEOs and sales departments a reason to spend money making their software safer.

As to the actual compliance requirements, we can start by taking all the data breaches we know about, putting them into categories, and then writing general rules that would have prevented each breach. Such as sanitizing user input, or checking for dangling pointers, that sort of thing.
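To make "sanitizing user input" concrete, here's the classic example in Python: a parameterized SQL query instead of string splicing. This is only an illustration of the kind of rule I mean, not proposed requirement text:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "Robert'); DROP TABLE users; --"

    # Vulnerable pattern: splicing untrusted input directly into the SQL.
    #   conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # Best practice: a parameterized query treats the input purely as data,
    # so the injection payload above is just an odd-looking name, not SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(rows)   # [] -- and the users table still exists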

(And of note: the FAA has lots of requirements, things like having a design document or code-coverage targets, that are useless for preventing breaches. All ISO-Software requirements should be a direct consequence of some breach that happened in the past, phrased in general terms that would have prevented that breach.)

User Journal

Journal Journal: Ask Slashdot: What are some good AI regulations?

There's been a lot of discussion about regulating AI in the news recently, including Sam Altman going before a Senate committee begging for regulation.

So far I've seen only calls for regulation, not suggestions for what those regulations should be. Since Slashdot is largely populated with experts in various fields (software, medicine, law, &c), maybe we should begin this discussion. And note that if we don't create the reasonable rules, Congress (mostly 80-year-old white men with conflicts of interest) will do it for us.

What are some good AI regulation suggestions?

I'll start:

A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis. If an AI suggests a diagnosis or medical treatment, there must be buy-in from a human who believes the decision is correct, and who would be held responsible in the same manner as a doctor not using AI. The AI must be a tool used by, and not a substitute for, human decisions.

This would avoid problems with humans ignoring their responsibility, relying on the software, and causing harm through negligence. Doctors can use AI to (for example) diagnose cancer, but it will be the doctor's diagnosis, and not the AI's.

What other suggestions do people have?

User Journal

Journal Journal: How to conduct an interview

The recent article Job Interviews Are a Nightmare prompted a few common-sense tactics that people should use while being interviewed.

It occurred to me that there's no corresponding common sense for the interviewer, so I thought I'd put some notes down about that.

How to interview someone: scan through their resume, find something they did that you know something about, and ask them a pointed question about that topic. Make the question challenging or controversial if possible: ask about an uncommon feature that only an insider would know, ask them to explain something complicated, ask something you don't know the answer to, or something that's not exactly black-and-white.

People like talking about themselves; it puts them at ease, and by watching them you can tell whether they come across as arrogant, friendly, knowledgeable, having a sense of humor, and so on. Just sit back, let their responses flow over you, and get a feeling for the person. Could you work with him? Does he know how things work? Can he explain complicated things? Would he give a solid presentation? Does he get along with others? (That's a big one.)

On purpose, ask a question he doesn't know the answer to (a technical question related to your own product, for example). Can he say "I don't know"? Follow it up with "If you worked here and I asked that question, what would be your next step?" See if he knows to look on the net, ask co-workers, go to the library, or e-mail an old professor; generally, whether he has the skills to find out what he doesn't know. If he's lost and doesn't know how to proceed, that's a data point (maybe not a problem for an intern, but a big problem for a senior software engineer).

I had one applicant who had worked on the electronics for GFI systems, so I asked: if I installed a GFI in my home, would it protect the other outlets on the circuit? (The answer is yes, it would protect the outlets downstream of it, farther from the breaker box.) The applicant didn't know, but thought that it probably wouldn't. Someone who doesn't know the fundamental theory of the thing he claims to have worked on is probably not a good fit at my company.

Another applicant had worked on emacs. I asked whether a line-based buffer or a char-based buffer would be a better software solution for an editor (in the sense of adding text in the middle of a block of characters), and got a complete list of pluses and minuses of both styles, with references to other software packages that used each type... yup, an actual expert in his field. And he comes across as a little bit Asperger's, which is not a problem, and his personality would fit well with the others here.
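For readers who haven't thought about that trade-off, here's a toy sketch of the two buffer styles. Real editors use more refined structures (gap buffers, ropes), so treat this purely as illustration:

    # Char-based: one flat string. Inserting copies everything after pos.
    def insert_char_buffer(text, pos, s):
        return text[:pos] + s + text[pos:]                     # O(len(text))

    # Line-based: a list of lines. Inserting rewrites only the one line.
    def insert_line_buffer(lines, row, col, s):
        lines[row] = lines[row][:col] + s + lines[row][col:]   # O(len(line))
        return lines

    buf = ["hello world", "second line"]
    print(insert_line_buffer(buf, 0, 5, ","))   # ['hello, world', 'second line']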

Stay away from any interview questions you find on the internet, and in particular don't google for questions that are listed as good to ask. These are worthless and won't tell you anything; note that the applicant *himself* can google those questions and their best answers. You might as well e-mail him the questions and let him e-mail back the looked-up answers.

Put the applicant at ease by asking him to explain something he did in the past, challenge him a little with your questions, and use your own sense of his behaviour to decide whether he's someone you'd like to work with.

User Journal

Journal Journal: Observations of the Arizona vote audit

I've been watching the Arizona vote audit with interest of late. I have *not* researched the process, on purpose, and have just been watching the audit itself and the high-level news headlines that say what's going on. I have some observations about security and the process in general.

I'm impressed with the process. I describe what I saw below, but note that *everything* is recorded in a transparent way. The audit was livestreamed so that anyone on the internet could view the process, ballots were read and verified by at least three human readers (on camera), the ballots were photographed (both sides) and scanned, and lots of paperwork was recorded and kept.

The process gives us a scientific tool for sorting out election claims.

For example, one claim is that ballots were (illegally) printed in China, shipped in, and added to the count. This is a hypothesis; it might be true and it might be false, but it's not completely crazy and we can test for it: paper from China contains bamboo fibers, this should show up on a UV scan, and all ballots were UV scanned. Done and done.

Another claim is that the tallies of the audit are not accurate due to partisan involvement. This is Russell's Teapot, and it's not even wrong: it makes no claims of *what* happened, or *where* in the process it could have happened, or *how*. It's not testable in any way. To make a Russell's Teapot claim, you also need to explain how the teapot got there in the first place.

We have video evidence of boxes of ballots being moved from secure storage, of the people reading ballots, and of the ballots being scanned, and we have the physical ballots themselves: where in this process did the adjustments happen? There is abundant traceability in the process: if you claim a particular defect, you have to explain how it got through the redundancy checks, why it's not visible on the video feed, and how several people didn't notice it (but you did).

And note that if there *is* any discrepancy between vote tallies and people believe it's partisan, we still have the original ballots and can turn the entire kit over to the other side and let *them* go through the physical ballots looking for problems, hopefully with the same level of transparency and recording.

It's really quite satisfying to know that there is this level of election integrity, and I hope other states use this process - or even improve on it.

There's a crisis of confidence in our elections right now, and a strongly secure and open audit will go a long way towards calming any unrest.

Notes on the actual process:

The audit focuses on ballots, registration, and counting machines.

Machines:

Counting machines are being forensically audited, and I don't know much about this. There have been some leaks, but I can't tell whether they come from the official auditors or from someone else, so I'm going to wait until the results are published. Other forensic analyses of vote-counting machines have resulted in reports containing an abstract, a multi-page list of conclusions, and many pages of "this is what we did, this is what we found". I expect something like that to come from this audit.

If they're doing things right, they took an image of a counting machine's disk and analyzed that, which means they left the original machine unmodified, they have numerous *other* machines completely untouched, and any claims can be verified by examining an untouched machine.

Regardless of the conclusions, I'm confident the machine audit will be verifiable by skeptics from both sides.

Registration:

Registration is being checked in a computer/database kind of way, and I didn't get much info on this. Arizona claims to be going through the database looking for "obvious" problems (their word), such as 80 people registered at the same address, people registered at vacant lots, voters who also have a death certificate, and so on.

Of note, there is no attempt to determine whether the mail-in ballots were signed (on the back, as required by law), and there is no attempt to match the signatures against known signatures on file, such as a driver's license. (NB: I may have this wrong, this could change later, or something similar.)

I like that, and I think it's a good move: the purpose of the audit *isn't* to disenfranchise voters, and if people forget to sign the back of the form, that's a system error and not a user error. Also, my own signature changes day to day, so signature matching isn't a valid check.

Ballots:

Ballots are read by a five-person team sitting at a round table with a lazy Susan in the middle (think Chinese-restaurant table with a carousel). One person places a ballot on the carousel and slowly spins it to face three people, who read it and make marks on count sheets; then the last person takes the ballot off and places it in a pile. This takes a couple of seconds per ballot, but sometimes you can see the carousel pause before a reader; she leans forward and adjusts her glasses, then nods to the dealer and the ballot proceeds (probably on verbal cues). One dealer, three readers, one taker.

The dealers I saw wore gloves, and I think this is a good idea: no blotches or ink stains from dirty fingers, and if false ballots are detected we might be able to get fingerprints. The readers never touch the ballots: I saw some of the duplicate ballots slide off (longer ballots with a fold at one end), and the readers pushed back from the table and allowed the dealer to make adjustments. (Duplicate ballots come from special votes: braille, large-text, and e-mailed overseas ballots are transferred to a "duplicate ballot" for machine entry. Damaged ballots are also sometimes duplicated.)

Ballots were read in batches (I counted 50 ballots for one run); then tallies were made, the tally sheets were handed in, everything was dated and signed, and the tallies were kept in envelopes glued to the sides of the ballot boxes. If there's any question, we can match specific ballots with the individual dealers/readers and the time-stamped video stream to check what happened.

In a separate section, ballots are also photographed front and back, and the images are saved. Anyone could see the photographing stations and watch ballots being placed and processed on the live stream.

Ballots are also scanned somehow, but I couldn't quite tell what this was from the video. I read that they are scanned at high resolution and under UV, looking for fibers broken by the fold and by the marks. If the marks were made by a pen-like instrument there should be broken fibers, but a photocopied mark won't have them. And if there is no fold, then the mail-in ballot wasn't actually mailed.

Also, apparently the checkbox marks are checked for randomness: marks with exactly the same shape indicate a machine process.

The ballot scans will not be released except by court permission. I think this is because the mail-in ballots contain voter information (name, address), and there were a lot of these this election. The scans, with the signatures obscured (or removed), would make a good dataset for AI research, so I hope that eventually happens.

User Journal

Journal Journal: Insightful Coronavirus discussion

Slashdot has many Coronavirus stories and lots of comments... but very little insight. Mostly it's all insults - one side to the other.

Can we have some rational discussion about the pandemic in terms of the shutdown? We used to do that here. I know there are medical experts, economists, lawyers, researchers, and other smart people here - and I don't think those are the ones being jerks.

I'll start (I identify as a math/physics guy):

Take a look at the coronavirus statistics and scroll down below the wall of numbers to the first graph, and note that the function has been linear for the last 5 weeks or so.

If you take 30,000 as the daily new coronavirus cases (the slope of the linear portion, a rough estimate from "Daily New Cases" further down the page) and note that there have been 1.3 million cases so far, this means that at the current rate it would take approximately 10,000 days, roughly 30 years, for coronavirus to infect the entire US population.
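Spelling out the arithmetic (rough numbers, assuming a US population of about 330 million):

    us_population = 330_000_000   # rough 2020 US population
    cases_so_far  = 1_300_000     # cumulative cases at the time of writing
    daily_cases   = 30_000        # slope of the linear portion of the curve

    days_remaining = (us_population - cases_so_far) / daily_cases
    print(round(days_remaining))          # ~10957 days
    print(round(days_remaining / 365))    # ~30 years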

Social Distancing and the shutdown turned the exponential rise into a linear one.

The original two reasons for the shutdown were that A) we weren't prepared, and B) the pandemic threatened to overwhelm our hospital capacity. Well, we're now prepared, and more recent and more accurate predictions show that hospitals won't be overloaded. NY and California seem to be the only places at risk, we have two Navy hospital ships on standby, and we have built overflow hospitals where needed, among many other preparations.

Given the enormous impact of the shutdown, doesn't it make sense to reopen?

Specifically, since social distancing seems to work so well, it makes sense to ease off on some restrictions until the pandemic resumes its exponential rise, then moderate the speed of that rise ("flatten the curve") so that hospitals aren't overwhelmed.

The theory being that everyone will get the virus anyway, the shutdown is wildly destructive, and the best way to navigate between those two evils is to let the pandemic run its course in a way that minimizes the economic damage but doesn't result in unnecessary deaths from hospital overflow.

This assumes vaccinations won't be available in time to help.

What insights can you bring to the discussion?

Any economists, medical experts, lawyers, or other smart people want to chip in?

User Journal

Journal Journal: Is metamoderation gone?

Today I submitted a story suggestion noting that it's now November 2019 - the timeframe of the events of Blade Runner - and linking an article comparing how the predictions of the story and movie line up with current technology...

...which was quickly downmodded as spam.

Slashdot has a problem: people are targeting and methodically downmodding specific Slashdot users based on their views. I suspect that some people own hundreds of accounts and scrape the site looking for particular users, and probably phrases as well. When something comes up, they use whichever account has mod points to shut down that user or viewpoint.

I wonder if there are organizations dedicated to methodically doing this across the internet. In 2016, a group called "Correct the Record" (now disbanded) was formed to confront social media users who posted unflattering comments about Hillary Clinton. I'm sure there was a similar group stumping for Trump that I couldn't find in a quick search.

I haven't seen an invite to meta-moderate in ages, and this is exactly the problem that meta-moderation addressed: if someone unjustifiably down-modded a comment, the moderation would eventually get reviewed by other users, and after repeated abuse that moderator wouldn't get mod points anymore.
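As I understand the mechanism (this is my reading of it, not Slashdot's actual algorithm), it amounts to something like this sketch: moderations get spot-checked by other users, and moderators whose calls keep getting overturned lose eligibility for mod points:

    # Toy sketch of the meta-moderation idea; names and threshold invented.
    def update_moderator(mod, verdicts, threshold=0.3):
        # verdicts: booleans, True = metamoderators agreed with the call
        unfair_rate = verdicts.count(False) / len(verdicts)
        mod["eligible"] = unfair_rate < threshold
        return mod

    mod = {"name": "example_user", "eligible": True}
    print(update_moderator(mod, [True, False, False, False]))  # loses eligibility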

Was meta-moderation removed?

Flagging a submission as spam is particularly insidious because your account can be locked for submitting spam articles! There's no warning; after a number of flagged submissions, you're banned. I don't want my account locked, so I will stop submitting stories to Slashdot.

Is scaring people away like this good for Slashdot?

This is also a reflection of the larger issue of abuse on the internet, with ties to fake news and political lies. Someone responding to an insightful post with abusive language and insults will tend to drive that user away, and can be legitimately criticized as wrong, unfair, or against the TOS or decorum of the forum, and should be discouraged. But what happens when someone unjustifiably downmods a comment? That's not seen as abusive or insulting, but it can be dispiriting. The arguments against abuse apply to unfair moderation as well.

Right now society is discussing the effects of foreign influence on elections. Could a foreign hacker group promote a candidate in the upcoming elections to gain unfair advantage? There's more at stake here than just scaring away quality posts on Slashdot - a lot more.

As one of the premier tech sites on the internet, maybe Slashdot should be at the forefront of coming up with solutions to some of these problems. Restoring meta-moderation would be a good first step, and at the very least it would experimentally test the idea.

If Slashdot implements a policy that works, it could serve as a model for other sites. Slashdot could contribute to fixing what is perhaps the biggest problem facing the internet right now.

Was meta-moderation removed from Slashdot?

User Journal

Journal Journal: I got banned! (Slashdot mods are awesome!)

I was banned from Slashdot for about a day.

If too many of a user's submissions get modded "spam", the Slashdot system automatically bans the account. I've been posting a lot of political stories, someone (or several someones) went and marked them all as "spam", and I got banned.

You can check my submissions yourself to see if you think they are spam.

If you think they *are* spam or otherwise inappropriate, I'd like to hear about it. I don't want to get banned again, and in any event I'm happy to behave more in keeping with what people want.

Kudos to Slashdot mod Logan who actually received and addressed my feedback about this!

Imagine - a corporate mod actually received and read a complaint, looked over the account, and unbanned it!

Totally blew my expectations.

Slashdot rocks.

User Journal

Journal Journal: New E-mail

Another e-mail address that will reach me is:

SourceForge (at) OkianWarrior (dot) com

User Journal

Journal Journal: Non-popularity of Open Source

Apropos of my recent post outlining why open source is not very popular:

I've spent some time researching usability, both in computer software and in other areas.

The post was necessarily brief - it only outlined 5 general trends and was light on context, explanation, and supporting examples.

A better treatment would explain all the trends that I see (perhaps a dozen), with more explicit explanations for each. Unfortunately, that's not appropriate for a blog post or [Slashdot] comment.

User Journal

Journal Journal: Contact info

I've just now discovered that I've got fans.

Contact info:

niroz (dot) 9 (dot) okianwarrior (at) spamgourmet.com
