
Comment They do use Linux. (Score 5, Informative) 627

I've worked for some of the largest banks in the world, and:
1.) They use craploads of Linux.
2.) They're going to stop using Windows.
3.) They'll never use Dropbox.

Details:

1.) They use craploads of Linux.

Just about every bank has declared Linux the future for application services, with a few exceptions for specific applications. Accounting will stay on mainframes for a very long time, collaboration will remain MS Exchange for a very long time, SharePoint probably as well, and rinky-dink one-off applications may still run only on Windows servers, but only if those apps come from software shops built by math/business/commerce geeks (algo stuff, etc.). Most databases, report generation, record keeping, document management, web-banking backends, and other banking workloads will continue the current UNIX-to-Linux trend. Some banks are 20% of the way through their UNIX-to-Linux projects, some are at 80%, but I don't know of any that aren't on that road.

I think you were talking about desktops, though, not the datacenters and server farms. That's a very superficial way to look at banking computing. Banks do not use Windows machines to do banking; they use Windows machines as desktops for running Exchange and Office, and they're thrilled that they can *also* use those same pieces of hardware as dumb terminals for people to SSH/Telnet into some banking applications and to reach the newer applications through the browser. If it weren't for Exchange and Office, they wouldn't use Windows; they'd use Linux thin clients. I actually know one bank that's trying to migrate people to Google Apps for exactly this reason, but it's really hard, because bankers really do love Office and Exchange.

2.) They're going to stop using Windows.

But they're not going to go to Linux. The banks are all calling it "BYOD," for "Bring Your Own Device." Bankers really, really, really want to use Mac desktops and iPads and Android phones and ditch Windows -- but there's no way they'll switch to Linux on the desktop unless that Linux is called Android. So the banks are currently running well-funded projects to replace all their Windows-desktop-only applications with web-based apps that work from any browser, and they're throwing lots of money at companies like Good Technology to get iPads and Android tablets into the workplace.

Microsoft is trying to use Office 365 or WTF it's called so that it can still sell stuff to banks that have ditched Windows on the desktop, but there's going to be lots of turmoil over the next 5-10 years as that progresses. Windows on the desktop in banks is effectively dead already -- I know three banks that have decided to stick with XP on the desktop instead of upgrading to Win7, because the Win7 upgrade money is better spent moving faster toward this better future.

3.) They'll never use Dropbox.

Banks are required to log everything, and logging everything you upload to Dropbox, everyone who downloads it, and all of that crap is so expensive that you're better off finding out what the approved tools are for doing what you want to do. Most banks will allow SFTP/SCP between trusted endpoints if the right people sign the right forms. In my experience, Dropbox is only ever requested in banks by someone who wants to break the law and is too stupid to know which law they'd be breaking.

Dropbox blocking is not something IT decided to do; it's something the lawyers required IT to do, and it has nothing to do with "security" in the sense that there are "security" differences between operating systems. It has to do with the kind of security you have in the lobby -- the kind that would ask questions if you started walking out the door with canvas bags that have dollar signs on them. If the banks allowed Dropbox, naughty employees would copy documents home for their day-trader spouses to use for insider trading (I've seen that more than once).

Comment Re:Finally! An interesting question. (Score 1) 153

Perl is for anyone, sure, but it's certainly not for everyone.

If the author and the user are the same singular individual, and always will be, then sure, Perl can be a fun toy and a timesaver.

But if you're mentoring junior sysadmins as part of your succession plan in a collaborative and evolving ecosystem, Perl is pretty much the worst choice available.

Comment Re:Finally! An interesting question. (Score 1) 153

If you want to "copy" junk with its metajunk from an ext3 filesystem onto a FAT32 filesystem, remember that you can always create a file with dd from /dev/zero (keep it under 4GB, FAT32's per-file limit), run mkfs.ext3 against that file, and then mount it as an ext3 filesystem via a loop device. You won't be able to read that junk from a Windows machine, but you probably won't care, and if you create a 4GB image on a 16GB FAT32 flash disk, you'll still have 12GB of space available for use in Windows -- and Windows will be able to copy the filesystem image around like any other file.
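A minimal sketch of that trick, assuming root access, a flash drive mounted at /mnt/usb, and illustrative paths and sizes throughout:

    # Create a 3.5GB image file; FAT32 caps individual files at 4GB,
    # so stay below that.
    dd if=/dev/zero of=/mnt/usb/ext3image.img bs=1M count=3584

    # Put an ext3 filesystem inside the image (-F forces mkfs to work
    # on a regular file instead of a block device).
    mkfs.ext3 -F /mnt/usb/ext3image.img

    # Mount the image through a loop device; anything copied in keeps
    # its ext3 metajunk (owners, permissions, symlinks).
    mkdir -p /mnt/ext3img
    mount -o loop /mnt/usb/ext3image.img /mnt/ext3img
    cp -a /home/you/stuff /mnt/ext3img/

    # Unmount before yanking the drive.
    umount /mnt/ext3img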

Someone else's explanation: http://nst.sourceforge.net/nst/docs/user/ch04s04.html

Comment Finally! An interesting question. (Score 5, Insightful) 153

First, ignore the people who encourage you not to try, and who point you in other directions. Sure, there are much better ways of doing this, but who cares? The whole point is that you should be able to do whatever you want -- and actually doing this is going to leave you _so_ much smarter, trust me.

Some douche criticized you for not knowing beforehand why hard links wouldn't work... because, you know, you should have been born knowing everything about filesystems. To hell with him; sally forth on your journey of discovery. This can be hella fun, and you'll get an awesome feeling of accomplishment.

First off, you're going to have trouble using rsync with the flash drive, because I assume your constraint is that you can't fit everything on it -- the drive is only big enough to hold the differences.

Next, come to terms with the fact that you'll need to do some shell scripting. Maybe more than just some, maybe a lot, but you can do it.

I'd recommend cutting your hard drive in two -- through partitions or whatever -- to make sure that "system" is fully segmented from "data." No sense wasting all your time and effort getting backups of /proc/ and /dev/, or, hell, even /bin/ and /usr/. Those things aren't supposed to change all that much, so get your backups of /home/ and /var/ and /etc/ working first. Running system updates on the road is rarely worth it, and will be the least of your concerns if you end up needing to recover.

Next, remind yourself how rsync was originally intended to work at a high level. It takes checksums of chunks of files to see which chunks have changed, and only transfers the changed chunks over the wire in order to minimize network use. Only over time did it evolve to take on more tasks -- but you're not using it for its intended purpose to begin with, since you're not using any network here. So rsync might not have to be your solution while travelling unless you start rsyncing to a personal cloud or something -- but its first principles are definitely a help as you come up with your own design.
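If you want to see that first principle in action, here's a toy illustration of the chunk-checksum idea -- not how rsync actually computes its rolling checksums, just the gist, with made-up file names:

    # Split a file into fixed 1MB chunks and checksum each one.
    split -b 1M bigfile chunk.
    md5sum chunk.* > chunks.md5

    # Later, split the updated file the same way and re-check:
    # matching checksums mean unchanged chunks, and only the
    # differing chunks would need to travel over the wire.
    md5sum -c chunks.md5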

The premise is that, while travelling, you need to know exactly which files have changed since your last full backup, and you need to store those changes on the flash drive so that you can apply them to a system restored from the full backup you left at home. You won't be able to do a full restore in the field, and you won't be able to roll back mistakes without going home, but I don't think either of those constraints will surprise you; you've likely come to terms with them already.

So, when doing the full backup at home, also store a full path/file listing with file timestamps and MD5 or CRC or TLA checksums either on your laptop or on the flash disk, preferably both.
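A minimal sketch of that manifest step, assuming bash with GNU find/stat/md5sum, and that /home, /var, and /etc are the trees you chose to back up (all paths illustrative):

    #!/bin/bash
    # Build a manifest of every backed-up file: checksum, mtime, path.
    MANIFEST=/mnt/usb/manifest-full.txt
    for tree in /home /var /etc; do
        find "$tree" -type f -print0 |
            while IFS= read -r -d '' f; do
                printf '%s %s %s\n' "$(md5sum "$f" | cut -d' ' -f1)" \
                                    "$(stat -c %Y "$f")" "$f"
            done
    done > "$MANIFEST"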

Then, when running a "backup" in the field, have your shell script generate that same report again, and compare it against the report you made with the last full backup. If the script detects a new file, it should copy that file to the flash disk. If the script detects a changed timestamp, or a changed checksum, it should also copy over the file. When storing files on the flash disk, the script should create directories as necessary to preserve paths of changed/new files.
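Continuing that sketch, here's one hedged way the field-side pass could work, with the same assumed manifest format and paths as above:

    #!/bin/bash
    # Field backup: compare the manifest made at home against one made
    # right now, and copy new/changed files to the flash disk.
    OLD=/mnt/usb/manifest-full.txt
    NEW=/tmp/manifest-now.txt
    DEST=/mnt/usb/changes

    # Regenerate the manifest exactly as the full-backup script did.
    for tree in /home /var /etc; do
        find "$tree" -type f -print0 |
            while IFS= read -r -d '' f; do
                printf '%s %s %s\n' "$(md5sum "$f" | cut -d' ' -f1)" \
                                    "$(stat -c %Y "$f")" "$f"
            done
    done > "$NEW"

    # Lines unique to the new manifest mean a new file, a changed
    # checksum, or a changed timestamp; copy those files over,
    # creating directories as needed to preserve paths.
    sort "$OLD" > /tmp/old.sorted
    sort "$NEW" > /tmp/new.sorted
    comm -13 /tmp/old.sorted /tmp/new.sorted |
        while read -r sum mtime path; do
            mkdir -p "$DEST$(dirname "$path")"
            cp -p "$path" "$DEST$path"
        done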

For bonus points, if the script detects a deleted file, it should add it to a list of files to be deleted. For extra bonus points, it should store file permissions and ownerships in its logfiles as replayable commands.
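For those bonus points, one possible approach to capturing deletions and ownership/permissions as replayable commands, building on the sorted manifests from the previous sketch:

    # Paths present at home but absent now were deleted; emit rm
    # commands to replay later rather than deleting anything here.
    cut -d' ' -f3- /tmp/old.sorted | sort > /tmp/old.paths
    cut -d' ' -f3- /tmp/new.sorted | sort > /tmp/new.paths
    comm -23 /tmp/old.paths /tmp/new.paths |
        sed 's/.*/rm -f "&"/' > /mnt/usb/changes/deletions.sh

    # Record ownership and mode as replayable chown/chmod commands.
    find /home /var /etc -print0 |
        while IFS= read -r -d '' f; do
            printf 'chown %s:%s "%s"\nchmod %s "%s"\n' \
                "$(stat -c %U "$f")" "$(stat -c %G "$f")" "$f" \
                "$(stat -c %a "$f")" "$f"
        done > /mnt/usb/changes/permissions.sh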

The script would do a terrible job of being "efficient" with renamed files, but the same is true for rsync, so whatevs.

I built a very similar set of scripts for managing VMware master disk images and diff files about ten years ago, and it took me two 7-hour days of scripting/testing/documenting -- this should be a similar effort for you as it was for a ten-years-younger me. I learned *so* much doing that back then that I'm jealous of the fun you'll have doing this.

Of course, document the hell out of your work. Post it on SourceForge or something, GPL it, put it on your resume.

Comment Re:Mandatory sleep information (Score 1) 307

The rule in this case was that the flight crew was supposed to notify the cabin crew of the start and end of any naps, so the cabin crew could act as a secondary check against going over the time limits -- that required notification didn't take place on this flight. The report cites several sleep-related breaches of airline policy.

http://www.tsb.gc.ca/eng/rapports-reports/aviation/2011/a11f0012/a11f0012.asp

Comment Re:radar... (Score 5, Informative) 307

Lots of facts wrong...

First Officer woke up. Captain said "hey, sleepyhead, you see that Air Force cargo plane coming towards us that the TCAS is telling us about?" First Officer points, "That thing?", "No, that's Venus, the Air Force cargo is lower." "Oh. Ah! It's coming right at me!" (Dives instinctively)

All within a couple of seconds after waking from 75 minutes of REM sleep in his chair, groggy as hell.

http://www.tsb.gc.ca/eng/rapports-reports/aviation/2011/a11f0012/a11f0012.asp

Comment Please read the actual report. (Score 5, Informative) 307

Please, please, please -- there are tons of very well-considered safety points in the real report, and the linked articles are very very very wrong.

http://www.tsb.gc.ca/eng/rapports-reports/aviation/2011/a11f0012/a11f0012.asp

To quote:

At 0155, the captain made a mandatory position report with the Shanwick Oceanic control centre. This aroused the FO. The FO had rested for 75 minutes but reported not feeling altogether well. Coincidentally, an opposite–direction United States Air Force Boeing C–17 at 34 000 feet appeared as a traffic alert and collision avoidance system (TCAS) target on the navigational display (ND). The captain apprised the FO of this traffic.

Over the next minute or so, the captain adjusted the map scale on the ND in order to view the TCAS target and occasionally looked out the forward windscreen to acquire the aircraft visually. The FO initially mistook the planet Venus for an aircraft but the captain advised again that the target was at the 12 o'clock position and 1000 feet below. The captain of ACA878 and the oncoming aircraft crew flashed their landing lights. The FO continued to scan visually for the aircraft. When the FO saw the oncoming aircraft, the FO interpreted its position as being above and descending towards them. The FO reacted to the perceived imminent collision by pushing forward on the control column. The captain, who was monitoring the TCAS target on the ND, observed the control column moving forward and the altimeter beginning to show a decrease in altitude. The captain immediately disconnected the autopilot and pulled back on the control column to regain altitude. It was at this time the oncoming aircraft passed beneath ACA878. The TCAS did not produce a traffic or resolution advisory.

Comment Duh (Score 5, Insightful) 469

There was no self-publishing; it was not a platform, not an infrastructure -- it was a centralized service that didn't interact with similar services from competitors.

Connect-from-home services like these popped up *all the time* in the 70s, 80s and early 90s from cable companies, newspapers, telcos and similar -- but they all died because they were all walled gardens designed to keep out the competitors of their parent companies.

The only services that thrived were the ones that had no parent companies with business models to protect -- AOL and CompuServe -- and even those died off once they connected themselves to the government/academic internet thingy and real competition started.

What's interesting is how many of these walled gardens evolved from voice-based IVR systems hosted by major newspapers in the 70s-90s, where you could dial up and listen to your horoscope, sports scores, movie showtimes, etc. over the phone. Those systems got more and more complex over time, and if you carried a wallet card of numbers and keypad commands, you could access a world of information from payphones or borrowed landlines while you were on the go! For a small monthly fee, you could even get a voicemail box to check while out and about, if you wanted to stay reachable but couldn't afford a pager.

Comment The answer you need to show your boss (Score 4, Informative) 84

Right here, pure gold: http://www.gartner.com/it/page.jsp?id=1400813

Read that 5 times, carefully, and then get your bosses to do the same. Seriously.

SAS70 is a *questionnaire* that the vendor completes, and then the auditors just go in and confirm that their answers are correct.

So I could say "we don't do backups" in my answer to the questionnaire, the auditors would verify that I didn't do backups, and I'd "complete" the SAS70 process (not a certification!) successfully.

It is the client that is responsible for reviewing the questionnaire and ensuring that the audited answers are sufficient for the needs of their business. That's called "vendor management," and it's a core practice area in ITIL.

Comment Easy, start by choosing what you need to change (Score 2) 315

- Percentage of staff with ITIL v3 Foundation certification: zero.

I know this because of your question. Watch these videos as a start: http://pmit.pl/en/it-management/free-itil-v3-course-collection-of-itil-v3-moviesdarmowe-szkolenie-itil-v3-zbior-filmikow-o-itil-v3/ -- and then sell a formal training course to your management.

The people joking that one of you is getting fired, or that you're all getting outsourced... they're probably right. Learning ITIL is all about learning what's important to your business stakeholders, how to monitor and measure those things, and how to make sure you're always making the right decisions based on the business priorities.

If you can't convince them to pony up for the three of you to take the certification course, then pay for it out of your own pocket; you'll need it to find a new job.

Comment Take the advice of professional research. (Score 1) 551

Most universities offer a 200-level course in industrial/organizational psychology, and taking one was, by far, the best thing I ever did for my management career. http://en.wikipedia.org/wiki/Industrial_and_organizational_psychology

First, it doesn't matter what kind of employees you have; they're all unique individuals. Packard's 12-item list is nothing but a sound-bite introduction to the concept of a psychological contract: http://en.wikipedia.org/wiki/Psychological_contract

Don't look at them as a bunch of older employees. Don't look at them as "technical" and yourself as "business." That's pigeonholing, and it will do more damage than good. You are people, they are people, and winning relationships will come only from striving to understand those relationships.
