
Comment Re:SUAFO (Score 0) 68

Fortnite's sliminess in ripping off PUBG, emotes, etc. aside, your argument doesn't really have anything to do with what they are saying. Epic is an app store, but they are not in a position to block other app stores from being installed on your device the way Apple does. On Windows, you can have the Microsoft Store, Steam, Epic, etc., or just download programs from the internet. You can't do that on an iPhone.

Second, they do allow 3rd-party payment integration: you can install a game from an indie dev, and that dev is not required to use Epic for in-game purchases. This is their other complaint with Apple, one that has been ruled on but maybe not followed in spirit on Apple's side.

So in a nutshell, they are saying it is unfair that if you want to be on iOS, you MUST pay Apple 30% of everything you do, from the initial sale to in-app purchases, even things like paying your Netflix subscription. On the Epic side, you could release a free "demo" version of a game with an in-game purchase that unlocks the rest of it through a 3rd-party payment processor, and Epic wouldn't see a penny of the sale. Whether that is sustainable as a business is another question.

The real argument is whether Apple is right to demand payment for maintaining the infrastructure and a desirable platform, or whether Epic is right to try to force the platform open so developers can retain more of their revenue. It is not about whether Fortnite is a slimy, exploitative business model: Fortnite is not an app store selling programs, it is a single program that may or may not have come from an app store, so of course it only sells its own content.

So it is unfortunate that Fortnite's predatory business model is overshadowing the app store fight, since both things can be true: Fortnite can be an exploitative cash cow that has a negative influence on the industry and on customer sentiment, and Epic can also be fighting for arguably positive changes that return revenue to developers instead of storefront owners.

Comment Confused... (Score 3, Insightful) 72

I'm confused, maybe someone can help clear this up for me:

Amazon plans to increase the ratio of individual contributors to managers by 15% by March-end,

and

"The way to get ahead at Amazon is not to go accumulate a giant team and fiefdom,"

seem to be conflicting statements; am I misunderstanding something here? Wouldn't fewer managers per individual contributor by definition mean that each remaining manager has more people to manage? For example, going from 10 ICs per manager to 11.5 per manager (a 15% higher ratio) makes every team bigger, not smaller.

Comment Re:Bold strategy, let's see if that pays off (Score 1) 192

I was going to post something similar. If they can argue that downloading copyrighted material is legal but sharing is infringing, doesn't that mean the last two decades of lawsuits against residential customers by the *AAs would no longer be valid?

I'm too lazy to look it up, but I recall one of the *AAs using the IP addresses of users downloading from a torrent to force ISPs to identify the customers so they could sue them. I don't recall that seeding was a requirement, though, just having downloaded the files.

So if Facebook wins, and someone puts up an archive of copyrighted material in a country that doesn't care about US copyrights, it would be totally legal to use such a service? Someone in China could put up every movie ever released in the US, charge $20/month, and US customers would be completely on the right side of the law?

Comment Re:The 80s (Score 1) 46

I'm not sure if you're complaining about the waiting or about something else with all the phone and bulletproof glass stuff, but we renewed a couple of passports a month ago and it took about 20 minutes, including filling out all the paperwork and getting pictures. True, they did not hand us the passports as we walked out, but they mailed them to us and they arrived the following week.

I'm honestly confused by the negative tone of the post. Is my experience out of the ordinary, or is there something I don't understand about the filling-it-out-by-hand complaint, or something else?

Comment Re:It's about time (Score 1) 62

It has always been my understanding that .local is for mDNS, not regular DNS; it is in the same class as link-local addresses like 169.254.0.0/16, which is why it's part of things like Bonjour and zeroconf for cases where there isn't actual infrastructure and clients need to auto-configure. That is different from private IP addresses, i.e. the 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 subnets. My understanding is that the .internal TLD is for the latter case, not the former, and that .local is primarily for the former. Does anyone have more information or clarifying context? Even the referenced RFC says multicast DNS.
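
For what it's worth, here is a quick way to see the link-local vs. private distinction using Python's standard ipaddress module (the sample addresses are arbitrary; only the ranges matter):

# Sketch of the distinction above using Python's stdlib ipaddress module.
import ipaddress

samples = {
    "169.254.12.34": "link-local, mDNS/zeroconf territory (like .local)",
    "10.1.2.3":      "RFC 1918 private space (what .internal is aimed at)",
    "172.16.5.6":    "RFC 1918 private space",
    "192.168.1.10":  "RFC 1918 private space",
    "8.8.8.8":       "plain public address",
}

for addr, note in samples.items():
    ip = ipaddress.ip_address(addr)
    print(f"{addr:>14}  link_local={ip.is_link_local!s:5}  "
          f"private={ip.is_private!s:5}  # {note}")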

Comment Re:Explanation (Score 1) 252

3. It needs some kind of middle layer so that you can move applications between displays, and displays between consoles. Think something like screen or tmux. Once you launch an app on a display, it is stuck there.

I know I'm late to the party, but you can do this using xpra. It still works around the X display idea, though, so you can't attach/detach individual windows; you start apps attached to xpra, then attach your X display to xpra. So it's very much like screen, though I think tmux has some more advanced functions for moving windows between sessions.
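
For anyone who wants to try it, the basic flow is roughly this (the display number is arbitrary, and this is from memory, so check the man page):

xpra start :100 --start-child=xterm   # launch xterm inside a detachable xpra session
xpra attach :100                      # attach the session to your current X display
xpra detach :100                      # detach again; xterm keeps running, like screen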

Comment Re:Beware Google's penchant for auto-updates... (Score 5, Informative) 197

The OP might not be completely wrong. According to dpkg-query -L google-chrome-beta, it installs a job at /etc/cron.daily/google-chrome which adds a Google source to /etc/apt/sources.list.d and then keeps Chrome updated based on settings in /etc/default/google-chrome. Seems a bit invasive to me.
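
If you want to check what it did on your own box, something like this shows it (the paths are from the dpkg listing above; the variable name is from memory, so treat it as approximate):

cat /etc/cron.daily/google-chrome              # the daily job itself
cat /etc/default/google-chrome                 # update knobs, e.g. repo_add_once (from memory)
ls /etc/apt/sources.list.d/ | grep -i google   # the extra source it dropped in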

Comment Been wanting something like this for a long time (Score 1) 386

These days I often use rsync to do hard-linked backups, which works mostly well but has some shortcomings. Backups from different machines don't get their duplicate files hardlinked, and files that are merely similar can't be hard linked at all, such as files that grow, like log files. More specifically, we have some database files that grow with yearly detail information, where everything before the newly added records is identical; that burns gigs of space every day during backups when maybe a few megs have actually changed.
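
For context, the hard-linked rsync setup I mean is the usual --link-dest arrangement, roughly like this (paths made up for illustration):

rsync -a --delete --link-dest=/backups/2009-01-01 /data/ /backups/2009-01-02/

Unchanged files become hard links to yesterday's copies, so they cost almost nothing; anything that differs at all gets a complete fresh copy, which is exactly the growing-file problem described above.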

Initially I liked the way BackupPC handled this by pooling and compressing all the files, with duplicate files from different backups automatically linked together. So I wrote a little script that duplicated the core functionality, hardlinking duplicate files together regardless of file stat, running on top of fusecompress to get compression too. The main problem is the time it takes to crawl thousands and thousands of files and relink them. On top of that, rsync will not use those deduplicated files as hardlink sources in the next backup if the stat info (mtime/owner/etc.) doesn't match, so the next backup contains fresh new copies that have to be re-hardlinked by crawling everything again. Plus you get no elimination of partial-file redundancy.
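
The relinking pass is essentially this idea (a stripped-down sketch, not the actual script; the fusecompress layer and stat handling are omitted):

# Stripped-down sketch of the dedup-relink pass described above; not the real
# script. Walks a tree, hashes file contents, and hard-links duplicates together.
# Assumes everything lives on one filesystem (hard links can't cross devices).
import hashlib, os, sys

def file_hash(path, bufsize=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def relink(root):
    seen = {}  # content hash -> canonical path
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            digest = file_hash(path)
            if digest in seen:
                os.unlink(path)              # replace duplicate with a hard link
                os.link(seen[digest], path)  # to the first copy we saw
            else:
                seen[digest] = path

if __name__ == "__main__":
    relink(sys.argv[1])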

So I looked around for a system that would let you compress out redundant blocks, and the closest thing I could find is squashfs, but it's read-only. That's a problem because we need to purge old daily backups occasionally to make room for newer ones. We keep the last six months of daily backups on a server and do daily offsite backups from that, so once a month we delete the oldest month's backups from the local backup server; with squashfs you'd have to recreate the whole archive, which would suck for a terabyte archive with millions of files in it.

At that point I knew what features I wanted but couldn't find anything that did it, so I went ahead and wrote a fuse daemon in Python that handles block-level deduplication and compression at the same time. I'm still playing around with it and testing different storage ideas. It's available in git if anyone wants to take a look; you can get it by doing:

git clone http://git.hoopajoo.net/projects/fusearchive.git fusearchive

(note the above command might be mangled because of the auto-linking in slashdot, there should be no [hoopajoo.net] in the actual clone command)

Currently it uses a storage directory with two subdirectories, store/ and tree/. Inside tree/ are files that contain a hash identifying the block list for the file contents, so two identical files consume only the size of a hash on disk, plus inodes. That hash points to the block containing the file's block list, which is itself a list of hashes of the data. Any files that share identical blocks (on a block boundary) have the redundant blocks take up only the size of the hash. Blocks are currently 5M, which can be tuned, and are compressed using zlib. So a bunch of small files get the benefit of compression and whole-file deduplication, while large growing files at most use up an extra block of data plus the hash info for the rest of the file.

So far this seems to be working pretty well. The biggest issue is tracking block references so a block can be freed when no file references it any more. It works fine currently, but since each block contains its own reference counter, a crash could leave the ref counts incorrect, and unfortunately I can't think of a better, more atomic way to handle that. The other big drawback is speed: it's about 1/3 the speed of native file copying, and from profiling, 80-90% of the time is spent passing fuse messages in the main fuse-python library, with a little time taken up by zlib and the actual file writes.
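
To make the store/ and tree/ idea concrete, the core mechanism is roughly this (a toy sketch of the layout as described, not fusearchive's actual code; refcounting and the fuse layer are omitted):

# Toy sketch of the content-addressed layout described above; not the actual
# fusearchive code. A file becomes a list of block hashes; blocks are stored
# compressed under their hash, so identical blocks are stored exactly once.
import hashlib, os, zlib

STORE, TREE = "store", "tree"
BLOCK_SIZE = 5 * 1024 * 1024  # 5M, as in the post

def put_block(data):
    digest = hashlib.sha1(data).hexdigest()
    path = os.path.join(STORE, digest)
    if not os.path.exists(path):          # duplicate block: already stored
        with open(path, "wb") as f:
            f.write(zlib.compress(data))
    return digest

def save_file(src, name):
    hashes = []
    with open(src, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(put_block(block))
    # the tree entry is just a pointer to the block list, itself stored as a block
    blocklist = "\n".join(hashes).encode()
    with open(os.path.join(TREE, name), "w") as f:
        f.write(put_block(blocklist))

def restore_file(name, dest):
    with open(os.path.join(TREE, name)) as f:
        list_hash = f.read().strip()
    with open(os.path.join(STORE, list_hash), "rb") as f:
        hashes = [h for h in zlib.decompress(f.read()).decode().split("\n") if h]
    with open(dest, "wb") as out:
        for h in hashes:
            with open(os.path.join(STORE, h), "rb") as f:
                out.write(zlib.decompress(f.read()))

os.makedirs(STORE, exist_ok=True); os.makedirs(TREE, exist_ok=True)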

If I could get something like that from a native filesystem that also supported journaling, so you didn't have the refcount mess, that would be pretty sweet. Plus I wouldn't have to waste time developing and supporting it :p

Slashdot.org

Introducing the Slashdot Firehose 320

Logged in users have noticed for some time the request to drink from the Slashdot Firehose. Well, now we're ready to have everybody test it out. It's partially a collaborative news system, partially a redesigned & dynamic next-generation Slashdot index. It's got a lot of really cool features, and a lot of equally annoying new problems for us to find and fix over the next few weeks. I've attached a rough draft of the FAQ to the end of this article. A quick read of it will probably answer most questions, from how it works and what all the color codes mean to what we intend to do with it.

Comment Re:Dammit (Score 1) 208

Comments and replies naturally form a tree, and the obvious way to store one is a self-referential parentid on the same table. In practice that becomes difficult for exactly the reason you cited: no recursion. Recursion is hard for a database to optimize, which I presume is why it's not built into SQL. The usual answer for modeling trees in SQL is nested sets, which let you extract part of a tree and determine depth at the same time; it's a very fast operation because you are simply selecting a range of numbers, which databases are very good at.
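
As a concrete example, a nested-set subtree query looks something like this (sqlite used for the demo; the lft/rgt column names are the usual convention):

# Minimal nested-set demo. Fetching a whole subtree plus its depth is a
# single range scan over lft/rgt, with no recursion anywhere.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT, lft INT, rgt INT);
INSERT INTO comments VALUES
  (1, 'root',        1, 8),
  (2, 'reply',       2, 5),
  (3, 'reply-reply', 3, 4),
  (4, 'other reply', 6, 7);
""")

# every node under the root, with depth, ordered as a threaded view
rows = db.execute("""
SELECT child.body, COUNT(parent.id) - 1 AS depth
FROM comments AS child
JOIN comments AS parent
  ON child.lft BETWEEN parent.lft AND parent.rgt
GROUP BY child.id
ORDER BY child.lft
""").fetchall()

for body, depth in rows:
    print("  " * depth + body)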
