
Submission Summary: 0 pending, 13 declined, 9 accepted (22 total, 40.91% accepted)

Submission + - Ask Slashdot: What Happened To The Prank Apps That Used To Be Popular?

OpenSourceAllTheWay writes: Back when PCs were boxier than today's and people stored their stuff on floppy disks, there were plenty of prank apps around that one could put on a DOS or Windows computer to annoy the hell out of siblings, classmates, coworkers and others. (Here is a listing of some older prank apps: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.prank-ideas-centra... and some more recent Android prank apps: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.androidheadlines.c... ) Some prank apps would flip the Windows desktop upside down. Some would make the mouse pointer move in strange ways or have it give you the middle finger. Some would make you mistype words even when you hit the right keys. Some would occasionally play an audio file in the background, so the computer seemed to be making strange noises for no discernible reason, even turning the OS volume up before each sound and back down afterwards so the noises could not be silenced. There are many more computer users today than there were back then, yet there doesn't seem to be much new in the way of prank apps, at least for Windows. Why is that? Did Windows 8 cause PC users to lose their sense of humor?
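
None of the linked apps publish their source code, but the mouse-pointer gag at least is easy to reconstruct. Below is a minimal illustrative sketch in Python (not code from any actual prank app), assuming the third-party pyautogui package is installed: it sleeps for a random interval, then nudges the pointer a few pixels.

    # Minimal sketch of the "mysteriously drifting mouse" prank.
    # Assumes the third-party pyautogui package: pip install pyautogui
    import random
    import time

    import pyautogui

    pyautogui.FAILSAFE = True  # slam the cursor into a screen corner to abort

    while True:
        # Wait 30-120 seconds so the nudges feel sporadic and mysterious.
        time.sleep(random.uniform(30, 120))
        # Nudge the pointer a few pixels in a random direction.
        pyautogui.moveRel(random.randint(-15, 15),
                          random.randint(-15, 15), duration=0.2)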

Submission + - Ask Slashdot: Should Open Source Projects Hire Professional UI/UX Designers?

OpenSourceAllTheWay writes: There are many fantastic open source tools out there for everything from scanning documents to making interactive music to creating 3D assets for games. Many of these tools share an Achilles' heel, though: while the code quality is great and the tool is fully functional, the User Interface (UI) and User Experience (UX) design is typically far inferior to what you get in competing commercial tools. In a nutshell, with open source the code is great, the tool is free, and there is no DRM/activation/telemetry bullshit involved in using it, but the UI/UX design is very often so weak that the tool is, unfortunately, far less of a joy to use daily than it should be. A prime example is the FOSS 3D tool Blender, which is technically great but falls on its face because of a poorly designed UI that is a decade behind commercial 3D software. So here is the question: should the dev teams of larger FOSS projects include a professional UI/UX designer who does the UI design for the project? There are many FOSS tools that would greatly benefit from a UI redesigned by a professional.

Submission + - Banksy Artwork Self-Destructs At Auction Right After Being Sold For $1.3 Million (metro.co.uk)

OpenSourceAllTheWay writes: Elusive street artist Banksy's famous 'Girl With Balloon' artwork was on sale at a Sotheby's auction in London inside what looked like a normal, if slightly old-fashioned, painting frame. As soon as the auction concluded and the artwork was sold to a bidder for a cool $1.3 million, a whirring noise started coming from the piece hanging on the wall, and 'Girl With Balloon' began moving down inside its frame, emerging from the bottom in shredded strips. In what must be an art world first, the artwork suddenly self-shredded in front of hundreds of stunned auction attendees. It appears that Banksy, or some other prankster, somehow installed a remotely triggered, battery-powered paper shredding mechanism in the bottom of the artwork's frame. In a post on his Instagram account, Banksy shared an image of the shredded artwork captioned 'Going, going, gone...', apparently mocking the practice of auctioning famous artworks off for large sums of money. The question now is precisely what, if anything, the buyer of the artwork gets for his or her money, and whether 'Girl With Balloon' is now worth more or less than before.

Submission + - Researchers Create 'Sans Forgetica', A Memory And Recall Boosting Font (cnn.com)

OpenSourceAllTheWay writes: CNN.com reports on a new font purposely designed to help students more easily recall the academic material they read. (Font and Chrome extension available here: http://sansforgetica.rmit/ ) Australian researchers say their new font, called Sans Forgetica, could be the tool to help people retain information. The typeface, which slants to the side and has gaps in the middle, is not easy on the eyes. But according to the team at RMIT University in Australia who conceived Sans Forgetica, it has the perfect amount of "obstruction" needed to recall information. The multidisciplinary team of typographic design specialists and psychologists said they designed Sans Forgetica using the learning principle called "desirable difficulty." The principle holds that when obstruction is added to the learning process, people are required to make a little more effort and end up with better memory retention. With normal fonts, "readers often glance over them and no memory trace is created," RMIT senior lecturer Janneke Blijlevens said in a statement. Conversely, if a font is too difficult to read, memory is not retained. "Sans Forgetica lies at a sweet spot where just enough obstruction has been added to create that memory retention," she said. To find that sweet spot, the researchers tested various fonts with roughly 400 Australian university students in a laboratory and an online experiment "where fonts with a range of obstructions were tested to determine which led to the best memory retention," RMIT said. "Sans Forgetica broke just enough design principles without becoming too illegible and aided memory retention," RMIT said.

Submission + - Ask Slashdot: Why Does Almost Nothing Come With A Proper Printed Manual Anymore?

OpenSourceAllTheWay writes: As someone who grew up with 1980s and 1990s computers and electronics and still has whole boxes of lovingly prepared printed computer, peripheral, game and software manuals from that era, I am continually surprised by just how many products ship without a proper printed manual these days. Case in point: Android phones. Android has quite a few not-entirely-obvious functions built into it, and a lot of people aren't even aware they exist. Yet no Android phone I've bought has ever had a printed manual in its little product box, not even a small one. Even expensive laptops costing $2,000 to $5,000 often come with only a few sheets of printed paper in the box: warranty card, where to register the device, URL for downloading drivers and so on. Why is this? It can't be environmental concern; the electronic devices themselves, when thrown away, are a hundred times more harmful to the environment than a little 50-to-100-page recycled-paper booklet would be. So where are the manuals? Is it the cost of preparing them? The cost of printing them? The few grams of extra weight added to the product box? Is everyone supposed to look everything up online now, even in places with no internet connection? And why can't there be a printed-manual option: pay $3 to $5 extra and get a full printed manual you can study on the couch?

Submission + - The Story Of Starlite, The "Blast Proof" Material (bbc.com)

OpenSourceAllTheWay writes: "The BBC has posted an interesting video series on 'Starlite', a white paste developed in the 1970s and 1980s by British hairdresser Maurice Ward that could completely insulate any object it coated, like a raw egg or a piece of cardboard, against extreme heat sources — even acetylene torches, nuclear blasts and lasers capable of heating an object to 10,000 degrees Celsius. Anything Starlite paste was smeared on could withstand extreme heat exposure without the coated object melting or combusting or heating at all in the process. The heat-proof paste got a lot of attention around the world when it was demonstrated on the BBC's Tomorrow's World TV program in 1990. Ward was an eccentric inventor — not a classically trained scientist — who came up with the formula for Starlite by experimenting wildly with different substances. He got the initial idea for Starlite when he was burning garbage in his backyard one day and one particular piece of garbage simply would not burn at all. Ward thought that Starlite would be worth Billions when commercialized. He let NASA and other scientists test Starlite — it did work as advertised — but never allowed anyone to retain a sample of the substance, fearing that it could be reverse engineered. Starlite never was commercialized properly, and Ward died in 2011 without making the Millions or Billions he had imagined he would. Sadly, Ward took the chemical formula for Starlite to his grave with him. To this day, nobody knows the exact chemical composition of Starlite, or how one might go about recreating the substance."

Submission + - Ask Slashdot: What Is The Latest And Greatest In Computer Graphics Research?

OpenSourceAllTheWay writes: In the world of 2D and 3D visual content creation, new tricks that ship with commercial 2D or 3D software are almost always advertised as "fantastically innovative". But when you dig into who precisely invented the new "trick" or "method" and when, you often find that it was first pioneered many, many years earlier by some little-known computer graphics researcher at a university somewhere. Recently, a flashy new 3D VR application released in 2018 turned out to be based on a 3D calculation method first patented almost 10 years ago. Sometimes the latest computer graphics software tricks go back to little-known research papers published anywhere from 15 to 25 years ago. So the question: what, in mid-2018, is the latest and greatest in 2D or 3D computer graphics research? And which academic/scientific publications or journals should one follow to keep abreast of the field?

Submission + - A Middle-Aged Writer's Quest To Start Learning To Code For The First Time (1843magazine.com)

OpenSourceAllTheWay writes: The Economist's 1843 magazine details middle-aged writer Andrew Smith's quest to learn to code for the first time, after he became interested in the, to him, "alien" logic mechanisms that power completely new phenomena like cryptocurrency and effectively make the modern world function in the 21st century. The writer discovers that there are over 1,700 actively used programming languages to choose from, and that every programmer he asks "Where should someone like me start with coding?" contradicts the last in his or her recommendation. One seasoned programmer tells him that programmers debating which language is best is the equivalent of watching "religious wars". The writer is stunned by how many of these languages were created by unpaid individuals who often built them for "glory and the hell of it". He is also amazed by how many people help each other with coding problems on the internet every day, and by the programmer culture that non-technical people are oblivious to. Eventually the writer finds a chart of the most popular programming languages online and discovers that they are Python, JavaScript and C++, the syntax of each of which looks indecipherable to him. With some help and online tutorials, the writer then learns to write a basic Python program that looks for keywords in a Twitter feed. The article is interesting in that it shows what the "alien world of coding" looks like to people who are not already computer nerds and in fact know very little about how software works, with many interesting observations on coding/computing culture seen through the lens of someone who has not spent the last two decades hanging out on Slashdot or Stack Overflow.
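
The article doesn't reproduce Smith's actual program, but a keyword filter of the kind described can be sketched in a few lines of Python. The tweet texts below are hypothetical stand-ins for a live Twitter feed, which would additionally require API credentials:

    # Toy sketch of a keyword filter like the one the article describes.
    # The tweets are hard-coded placeholders; a real version would fetch
    # them from the Twitter API.
    KEYWORDS = ["bitcoin", "blockchain", "crypto"]

    tweets = [
        "Just paid for coffee with bitcoin!",
        "Lovely weather in London today.",
        "Is blockchain really the future of finance?",
    ]

    for tweet in tweets:
        text = tweet.lower()  # normalize case before matching
        if any(keyword in text for keyword in KEYWORDS):
            print("MATCH:", tweet)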

Submission + - Ask Slashdot: Could Asimov's Three Laws Of Robotics Ensure Safe AI? (wikipedia.org)

OpenSourceAllTheWay writes: There is much screaming lately about possible dangers to humanity posed by Artificial Intelligence that gets ever smarter and more capable and might, at some point, even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives, and Isaac Asimov's Robot stories in particular often revolved around Laws of Robotics that robots were supposed to follow so as not to harm humans. The famous Three Laws of Robotics, as given on Wikipedia:

        1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So here's the question: if science fiction has already explored humans co-existing with intelligent robots or AI in various ways, isn't there a lot to be learned from these literary works? If you programmed an AI so that it could not break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions "so 20th century" that AI builders won't even consider learning from their work?
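
For what it's worth, the obvious way to encode such laws in software is as an ordered list of veto rules checked in priority order. Here is a deliberately naive Python sketch; every predicate in it, such as harms_human, is an assumption that nobody actually knows how to compute, which is arguably the crux of the whole debate:

    # Toy sketch: the Three Laws as prioritized veto rules. The hard part
    # in reality is computing predicates like harms_human at all; this
    # only shows the priority logic, not a workable safety mechanism.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool        # First Law: would this injure a human?
        allows_human_harm: bool  # First Law: the "through inaction" clause
        ordered_by_human: bool   # Second Law: was this ordered by a human?
        endangers_robot: bool    # Third Law: does this risk the robot?

    def permitted(action: Action) -> bool:
        # First Law dominates everything else.
        if action.harms_human or action.allows_human_harm:
            return False
        # Second Law: obey human orders that survive the First Law.
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not action.endangers_robot

    # Example: an order that would harm a human is refused.
    print(permitted(Action(harms_human=True, allows_human_harm=False,
                           ordered_by_human=True, endangers_robot=False)))
    # -> False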
