The 'dongle' referred to is probably the requirement to buy a Mac to develop and publish with Xcode for iOS devices. (You can only legally virtualize OS X on Apple-branded hardware, because Apple refuses to license it to run on any other virtualization host.)
Leaving the controller running (Mine's running on an Ubuntu VM on my NAS) also lets you track bandwidth usage, in case you have limits or capacity issues you're trying to monitor for.
Also, you can (optionally) configure the controller for remote login: sign into https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Funifi.ubnt.com%2F and gain the ability to remotely manage your network.
MSPs use this extensively. I use it to help family who wanted a more secure option and to replace an aging Buffalo OpenWRT device that only did 2.4GHz and wasn't getting updates anymore (I update theirs shortly after I update mine). You could also link multiple sites off one controller and just host it for them. (It's a bit involved to set up initially that way, since you then need DNS entries for the controller; I haven't gone down this path yet, so I can't confirm how hard it is.)
They also sell a CloudKey (basically an Intel Compute Stick) that can run the controller, but since it only has 8/16GB of flash, they don't recommend doing logging on the device.
I was able to hook mine up to my Windows 7 desktop and use it for Steam Big Picture mode. Not everything worked out of the box, and I didn't putz with it enough to get it fully functional, but with a little tweaking it'd probably be perfect. (This was ~Sept 2013 or so.)
There is also an Android app, "Blue Board", that lets you use your Android phone or tablet as an input device for the Ouya (you install it on both the Ouya and the controlling device). It makes keyboard input much easier if you're using it for web surfing and such.
...and will likely suffer the same fate.
You're describing UltraVNC Single Click: http://www.uvnc.com/products/u...
http://devnull-as-a-service.com/ - as long as we're outsou^h^h^h^h^h^hmoving everything to a managed service, why not
http://semiaccurate.com/2013/10/21/microsoft-admits-image-net-consumer-negative/
Because they've realized the 'Microsoft' name has such negative connotations in the consumer market that they don't want CxOs shooting it down based on the name alone. They also wanted a brand that wasn't directly tied to their Windows environment, since it's where they want you to run your Linux VMs "In The Cloud":
"...we knew that we needed to ensure that Windows is the best platform to run Linux workloads as well as open source components."
BSA holds trademarks that are 'infringed' by this organization's name, and you are required to actively defend a trademark against anything that could be infringing; otherwise you lose it. (This is not true of copyrights, just trademarks.)
As a fellow Eagle Scout, I agree it isn't wonderful or ideal behavior, but if they want to keep their name (and with all the splinter orgs resulting from their recent decision on youth membership, there are plenty) and the uniqueness of their 'brand identity', they have to do this.
That, and in the US, most carriers don't offer a discounted rate for buying the device outright (or bringing your own), so if you're going to use the service anyway, you may as well get the discounted phone.
The phrase '$1 Billion' gets people to sit up and notice.
But most of this work won't benefit the Linux community and software at large, at least not directly. It will be ancillary improvements, where something gets rewritten, improved, or fixed due to issues on the POWER architecture and happens to benefit everyone else too. Hopefully those are many and useful.
Still, any investment shows that Linux is Serious Business.
So does that mean when the servers are down, I'm supposed to pull the secretary into the meeting where we try to fix it?
How about the janitors?
You let the folks you hired for a task work on that task. You don't reassign everyone to focus on one thing; that's overkill and a waste.
For situations where the agents can't talk back to the Puppet master, you can push the manifests (config files) out to each host and apply them directly, locally, as if it were a single standalone machine.
I'm not sure whether there's a way to push the results back to a Puppet master for aggregation, but there may be a way to tackle that. (Or just send them to a central logging server for parsing.)
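That masterless workflow can be sketched roughly like this. It's a minimal sketch, assuming the Puppet agent package is already installed on the host; the paths are Puppet 3-era defaults, the hostname `webserver01` is made up, and rsync is just one way to push the files out:

```
# Push the manifests out to the host however you like (rsync shown here):
rsync -a /etc/puppet/manifests/ webserver01:/etc/puppet/manifests/

# On the host, apply locally, as if it were a single standalone machine:
puppet apply /etc/puppet/manifests/site.pp

# Dry-run with --noop first to see what would change without touching anything:
puppet apply --noop /etc/puppet/manifests/site.pp
```

You'd typically wrap the apply step in a cron job on each host to get the same periodic-convergence behavior the agent/master setup gives you.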
Also this way there is a globally-accessible and searchable database of all the materials and their various properties - so for your exotic project with a weird requirement, you can find the materials most appropriate to your situation.
This is useful for more than coming up with a single solar cell, it helps pave the groundwork for hundreds of varieties - each the best-fit for a different situation.
Example: organic compounds may make sense if you can 'grow' the system into something self-repairing or self-expanding, say in a biodome on Mars or on a floating station in the Arctic, neither of which offers an easy opportunity for a 'service call'. Identifying which one(s) work best in those environments will shave years off development time, allowing a focus on other design issues.
http://www.packtpub.com/puppet-3-beginners-guide/book
(Currently) $23 USD for the eBook, and $45 USD for the Print + eBook access, and no Amazon-Kindle-DRM. (But you can still get it in a
Nothing is faster than the speed of light ... To prove this to yourself, try opening the refrigerator door before the light comes on.