Re: If you require people in the office, turn off VPN
There's an easy solution to that. Just turn on the VPN after hours only!
The problem with a complete lack of adversity is that people don't learn to independently solve problems. They have never experienced failure. Everything has been given to them on a platter without the possibility of failure.
I work in R&D. Failure is expected, but some of these new entrants seem to have never experienced it at home, school, or university. When they hit it repeatedly, they get upset and depressed over it. But that's just part and parcel of doing new and interesting stuff. You keep on trying, and eventually success will happen. But it does require some innate grit and determination to plough on past the setbacks.
You're right that there's a lot of knowledge available. But knowledge is not understanding. And knowledge alone won't solve problems.
I've worked with quite a few new entrants to the workplace, and one thing I've found is that [greatly generalising] there's a noticeable lack of drive and determination. When faced with a problem, the first thing they do is google for the answer, maybe try Stack Overflow, and then... give up. There's no inherent desire to prove themselves capable by researching, reading, experimenting or even asking for help. They didn't even *try*, and that's the most frustrating part of it all.
This isn't all of them, of course. But there's something missing from a significant number. One other thing I've noticed is that none of them read. They aren't subject experts because they never had the intrinsic motivation and interest to expand their knowledge. By the time I left university I had read dozens of programming and computer science books and had at least three languages under my belt (C, C++, Perl), and I didn't even study that subject! When some of them eventually ask for help, I'll often give them an immediate and detailed answer, and their response is usually "how do you know all this stuff?", to which I'll reply "I read it in a book". Immediate access to knowledge through internet searches is no substitute for deep learning, and that's where I think a lot of the problems lie.
You need both. When I'm onsite interruptions are both expected and necessary. Can it be annoying and frustrating? Absolutely. But if other people need my help or input and direction, that's what I'm there for. If I didn't get interrupted, those people would get stuck, whereas those small and large interruptions help the rest of the company be productive without having to wait days, weeks (or never) for a useful interaction.
I find that most of the complaints like this, and indeed a lot of the pro-homeworking arguments, come largely from a very selfish perspective where it's all about the convenience and comfort of the individual, while completely ignoring that there is a lot of value in direct interpersonal interaction that benefits the company as a whole even while it may be a small inconvenience for the individual.
Just to add a bit of context. Before I quit as a Debian developer, I implemented mounting of
The sequencing of what can be done before and after the
Have you read the CDDL licence? It comes with an explicit patent grant. It's freely embeddable in other projects under any other licence you care to use, from BSD to proprietary. The legal risk posed by ZFS is negligible, because the CDDL explicitly grants you the rights to use and distribute it; Oracle, or any other rights holder, isn't going to be able to sue when their own licence terms state in black and white what you are allowed to do.
The "incompatibility" with the GPL is solely due to the way the GPL is written. The CDDL is absolutely fine being embedded in projects under other licences, and OpenZFS is used in plenty of other systems without any problems whatsoever.
They certainly make money from advertising. But they could have had end users as bona fide customers as well, if only they had tried.
$5/month for API access and advertising
$10/month for API access and no advertising
No need to charge the makers of client applications. Charge the end user directly and let them use their client of choice.
"Color coordinates are color coordinates."
Are they really? If you read up on the history of colour, you'll quickly see that it's a complex problem. Pantone provided a solution and won out over the early alternatives.
There's a reason that we don't have an open standard for this stuff. It's because it's hard, and it's because it requires a lot of up front effort to define and then a lot of ongoing effort to work with specific industries to develop the materials and chemistry to interoperate with the standard. The standard isn't just a paper document, it's defined ratios of specific base pigments. It's a physical tangible thing that they manufacture. Pantone is the glue that lets you reproduce the same colour in print, on fabric, glass or painted onto metal, or (where possible) on a computer or TV display. That required a significant amount of time and effort working with a diverse range of industries and industrial processes. We might want an open solution here, but the reality is that Pantone did all of the hard work, and companies that want accurate colour matching pay them handsomely for high quality results.
While I have no doubt that the theoretical side of things is tractable for open source developers, the reality is that what Pantone provide are standard reference materials of defined colours. You're not going to get those manufactured to a defined standard by a bunch of random people on the internet. There will be differences, and it would not be "standard". That's why you pay them, you get a guaranteed standard reference colour, with a defined lifetime and known properties.
Sorry if I misread it!
A single wavelength would not be sufficient. You couldn't represent colours which combine red and blue. A linear shift from B to G to R will allow the representation of cyan and yellow. But not magenta. You would need a complete emission spectrum spanning all visible wavelengths from near-UV to near-IR. Both "RGB" and "CMY" are very crude approximations of that. But you could expand it by sampling more points across the visible range.
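To make that concrete, here's a minimal sketch (purely illustrative) of a sampled-spectrum colour as opposed to an RGB triplet. The 380-730 nm range, the 10 nm spacing and the two spikes standing in for "magenta" are assumptions for the example, not any real standard.

```c
/* Sketch: a colour stored as relative power at sampled wavelengths rather
 * than as an RGB triplet.  Range and spacing are illustrative assumptions. */
#include <stdio.h>

#define SPECTRUM_SAMPLES 36              /* 380 nm .. 730 nm in 10 nm steps */

struct spectral_colour {
    float power[SPECTRUM_SAMPLES];       /* relative spectral power per band */
};

/* Wavelength (nm) of sample i. */
static float sample_wavelength(int i)
{
    return 380.0f + 10.0f * (float)i;
}

int main(void)
{
    struct spectral_colour magenta = {{0}};

    /* "Magenta" needs energy at both ends of the visible range, which a
     * single wavelength (or a single B->G->R slider) cannot express. */
    magenta.power[3]  = 1.0f;            /* ~410 nm, violet/blue end */
    magenta.power[30] = 1.0f;            /* ~680 nm, red end */

    for (int i = 0; i < SPECTRUM_SAMPLES; i++)
        printf("%3.0f nm: %.2f\n", sample_wavelength(i), magenta.power[i]);
    return 0;
}
```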
We also need to consider that this would be a specification for emission, i.e. an additive colour model. But most applications outside RGB displays are subtractive, working by absorption and reflection, so the sense of the values is inverted: more pigment means less reflected light.
Pantone also caters for gloss and metallic aspects.
Plenty of existing file formats can be adapted to represent more complex colour models. TIFF has a PhotometricInterpretation tag which can be (amongst others) RGB or CMYK, and it can be extended to define a different interpretation with more samples per pixel. Most of the other common formats are fairly limited in this respect.
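As a rough sketch of that TIFF mechanism, the following uses libtiff's PhotometricInterpretation and InkSet tags to mark an image as ink-based with more than four samples per pixel. The dimensions, ink count and blank pixel data are placeholders for illustration, not a recommended workflow.

```c
/* Sketch: a TIFF tagged as an ink-based (subtractive) image with six
 * samples per pixel, using libtiff.  All values are placeholders. */
#include <stdint.h>
#include <string.h>
#include <tiffio.h>

int main(void)
{
    uint32_t width = 64, height = 64;
    uint16_t samples = 6;                 /* e.g. CMYK plus two extra spot inks */
    TIFF *tif = TIFFOpen("spot.tif", "w");
    if (!tif)
        return 1;

    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, samples);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    /* PHOTOMETRIC_SEPARATED means "amount of ink", not "amount of light";
     * INKSET_MULTIINK marks the inks as something other than plain CMYK. */
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_SEPARATED);
    TIFFSetField(tif, TIFFTAG_INKSET, INKSET_MULTIINK);

    /* Write blank scanlines so the file is well formed. */
    unsigned char *row = _TIFFmalloc((tmsize_t)width * samples);
    memset(row, 0, (size_t)width * samples);
    for (uint32_t y = 0; y < height; y++)
        TIFFWriteScanline(tif, row, y, 0);

    _TIFFfree(row);
    TIFFClose(tif);
    return 0;
}
```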
I'm not personally familiar with printing workflows, though I have worked with dyes. I would have thought you would be working with spot colour (indexed) rather than RGB or CMYK to unambiguously reference specific Pantone colours. It would be up to the printer to define the formulation to recreate that physically.
With regard to the last two paragraphs, I think you're coming at this from the wrong direction. Yes, the Pantone colours have RGB values as approximations for reference. But those are computed from the colour; you can't map it backward and get the Pantone colour from the RGB value. Pantone is not RGB values. It's defining colour as a mixture of several (~14) base pigments, each of which has a defined colour (absorption and reflection spectra). Their whole catalogue is derived from various formulations of these base colours. What you're buying from them is a physical product of these printed onto a specific type of paper, and you use these as the calibrated reference for all colour comparisons. Some of these can be represented as RGB, and some more of them can be represented as CMYK, but CMYK has a number of limitations, just as RGB does, in that it's incapable of representing the full range of the base colours.
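To illustrate the distinction in code, here's a purely hypothetical sketch of a spot colour expressed as a recipe of base pigment ratios rather than as an RGB triplet. The pigment count, indices and numbers are invented for illustration and are not Pantone's actual proprietary formulations.

```c
/* Hypothetical sketch: a spot colour as a pigment recipe.  The values and
 * the notion of 14 base pigments are illustrative, not real Pantone data. */
#include <stdio.h>

#define NUM_BASE_PIGMENTS 14

struct spot_colour {
    const char *name;
    /* Parts of each base pigment in the mix; the reference swatch printed
     * from this recipe is the real "definition" of the colour. */
    double parts[NUM_BASE_PIGMENTS];
};

int main(void)
{
    struct spot_colour example = {
        .name  = "Example Warm Red (invented)",
        .parts = { [2] = 12.0, [5] = 3.5, [13] = 0.5 },
    };

    double total = 0.0;
    for (int i = 0; i < NUM_BASE_PIGMENTS; i++)
        total += example.parts[i];

    for (int i = 0; i < NUM_BASE_PIGMENTS; i++)
        if (example.parts[i] > 0.0)
            printf("pigment %2d: %5.1f parts (%.1f%%)\n",
                   i, example.parts[i], 100.0 * example.parts[i] / total);
    return 0;
}
```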
In other words, it's not about "data structures", or even about colourspace conversions at its most fundamental level. It's about defining colour physically by mixing colours, and then using these pigment ratios as the standards against which equipment and processes are calibrated. This has a lot of value. It's why the manufacturing industry runs on Pantone colour matching. We can't say that "RGB" or "sRGB" are better or worse here--it's a completely separate problem domain, for which neither is the solution.
The "problem" with RGB is that it *doesn't* specify a spot on the electromagnetic spectrum. It's just a ratio of three lines on that spectrum that approximate to "red", "green" and "blue" photoreceptor responses in our eyes. The specific emission spectra for the R, G and B components and the spectral overlap between them is undefined. It depends very much on the specific output device (phosphors, filters, white light source used, LED materials etc.). So the basic premise is already broken--RGB doesn't define a wavelength, it's an approximation rooted in our perception of colour. And it doesn't cover the full spectrum either. It's just three arbitrary points on a continuum.
The other part is that "light" for illumination isn't standardised. We have all sorts of white light sources. Natural daylight, arc lamps, deuterium lamps, tungsten filament, halogen, fluorescent tubes, LEDs, and more. We also have a lot of non-white light sources. The item we need to colour match will look subtly or unsubtly different under all of these different conditions. Making this "implementation independent" is impossible, because we can't standardise the world we live in to only use a single type of light source.
None of this is new. Manufacturers are well aware of all of this, for everything from metallic car paint in daylight to fabric in clothing stores under specific shop lighting. Pantone is the system used to guarantee that the designer's colour choices are perfectly matched on the final product under the intended lighting conditions.
You're completely free to make your own independent set of colour codes. If you want to make your own standard based upon physical measurements with a spectrophotometer, no one is stopping you.
You're not paying for the codes. You're paying for the whole Pantone process of colour mixing, which ultimately produces the physical reference swatches you match against the output of your process to ensure they are aligned and you are getting the specified colour exactly. I've done work in a dye lab where we used those physical books of colours to match dyes on fabric to the reference swatches. That's ultimately what Pantone is for--physical colour matching. Using it on a computer is mostly a convenience for designers, unless you are colour-matching with a calibrated display.
This is where people get confused. Pantone was not for digital image display. It's about colour in the real world. You'll use it digitally for printing. But there are many industries which use the physical cards to tune physical processes to make plastics, paints, dyes and such with specific colour properties, using Pantone as the reference system for that.
"RGB" values are meaningless without a specified whitepoint and specified gamma curve. It's just three numbers, without any additional description of how to translate them to physical reality. That translation will be different depending upon the output medium, be it a CRT monitor, TFT-LCD display, OLED display, laser printer, web offset-printed magazine, dyed fabric, coloured plastic, paint, or whatever else you can apply colour to. Pantone unifies the colour specification for all of these media.
Did you know that when you dye fabric you don't just specify the colour, you also specify the lighting conditions? Because the whitepoint of the light source affects the perception of colour, the dyes are matched to the Pantone colour swatch under the specific lighting conditions. The same applies to other applications.
RGB doesn't even start to be sufficient for any of these applications. You first need to know whether you are using a linear scale or need to apply a calibration curve such as a gamma curve, then you need to know the whitepoint, and you also need to know the characteristics of the processes and media in use to do additional correction.
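As a small illustration of how much extra specification those three numbers need, here's a sketch that applies the published sRGB transfer curve and the sRGB/D65 matrix to get CIE XYZ values. It only holds for that one RGB encoding; different primaries, a different whitepoint or a different transfer curve would need different constants, and the example pixel values are arbitrary.

```c
/* Sketch: turning an 8-bit "RGB" triplet into CIE XYZ.  The transfer curve
 * and matrix below are the standard sRGB/D65 ones; other RGB encodings
 * need their own curve and matrix. */
#include <math.h>
#include <stdio.h>

/* Undo the sRGB transfer ("gamma") curve: 8-bit code value -> linear light. */
static double srgb_to_linear(int v8)
{
    double c = v8 / 255.0;
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

int main(void)
{
    int r8 = 200, g8 = 100, b8 = 50;      /* arbitrary example pixel */

    double r = srgb_to_linear(r8);
    double g = srgb_to_linear(g8);
    double b = srgb_to_linear(b8);

    /* Linear sRGB -> CIE XYZ, valid only for sRGB primaries and D65 white. */
    double X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    double Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    double Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

    printf("XYZ = %.4f %.4f %.4f\n", X, Y, Z);
    return 0;
}
```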
RGB is a handy encoding for RGB displays, directly tied to the display technology, so it's more of a process-specific encoding than a definition of the output (sRGB and others are more useful because the encoding is fully specified, but only for certain display types). Pantone is a handy specification of output which doesn't specify how you obtain that output. It's a reference for colour matching.
For the most part, I would think this won't apply. Most firmware is being used on discrete parts which don't have any external connectivity of any sort. On your NIC or BMC, maybe that's a concern, but on other parts it's simply not going to do anything except its single job. It won't be wired up to anything but the parts it needs to control.
Some of these parts do have proper CPUs on them; for example, my HBA has a PowerPC processor on it. These are, to all intents and purposes, completely independent computer systems within the larger system. They do have the potential to be repurposed or updated. Then you have all of the tiny MCUs scattered all over the place: still somewhat conventional cores, but very specialised, and they may require special toolchains. Then you have even smaller parts, plus FPGAs etc. which are even less accessible to the free software developer. There's a spectrum of accessibility and complexity there, and I don't see a clear-cut line you could draw to separate "must require open firmware" from "closed firmware is acceptable since there's no other choice".
This is where (as I mentioned in the original article comments) the FSF and free software community have gone somewhat off the rails. None of these discrete but connected systems have anything to do with code running on the main CPU; they are completely independent. For the most part, these are black boxes which perform defined functions with well-defined interfaces. The vast majority could not be meaningfully updated by free software hackers even if they were accessible, because they are highly specialised and require both expert domain knowledge and the means to properly test and validate any changes. Successfully maintaining working and safe firmware is in practice going to be out of reach for many types of device.
When writing software for mainstream CPUs, we're used to both free software and proprietary software that comes with no guarantees and no liability for misbehaviour. For the most part, the software we write on CPUs is fairly free of consequences if things go wrong. However, a lot of firmware comes with actual legally-binding guarantees, so the parts can be used in safety-critical systems, and part of this is having rigorous verification and validation of both the hardware and software in combination. If you start messing around with custom firmware on a GPU or NIC, that's one thing. But what about custom firmware on a battery charger? A misunderstanding or mistake there could lead to a raft of serious fires and risk to property and life. We're not used to having to deal with regulatory and legal implications when we write free software, but a good amount of proprietary firmware has to do exactly that.
I'm not sure exactly where it is appropriate to draw the line between where free software control of the core system gives way to proprietary firmware on connected peripherals, but it's certainly not black-and-white. There are practical limits to what is possible here, and there are, as mentioned, non-technical considerations to factor in as well.
There is also the question of transiently-connected devices. If I make a device with an MCU in it, which supports connectivity over USB, including firmware update, why should I be compelled to make my system fully open? I do sometimes wonder about the absurd entitlement of many Linux users.
That is not the point I was making.
Yes, and no.
The Linux kernel has one key difference. At its core, it is a reimplementation of the Unix/POSIX system design, based upon several decades of prior art in both design and implementation. This has meant that while the developers are certainly talented and hard-working, for the most part they have not been undertaking complex original design. They are reimplementing a system which is well understood and which has already been implemented several times over.
When it comes to greenfield development needing detailed requirements, design specifications, implementation, testing and iterative rework, I suspect that in-person collaboration comes more into its own.
Note that we can see this even within the Linux kernel. While they have done a pretty good job reimplementing the classic interfaces, the same cannot be said of the newer features which have been done from scratch. A lot of the newer features like cgroups, epoll, btrfs, and such are individually functional but can interact badly. Each feature has been developed in isolation without consideration for the implications (e.g. security implications) of its integration into the wider system, and I think the disconnected mode of development is partly responsible for this. Contrast that with in-person interactions, where a lot of design flaws and limitations might have been shaken out very early on by some immediate critical feedback.
The other aspect to consider is that if you make the assumption that Linux developers are a set of self-selected high-performing experts, they are uniquely capable of operating in a disconnected fashion. If you look at the spread of experience, capabilities and performance in a typical company, it's quite clear that not everyone is suited to working completely independently. Many people need in-person interactions to keep them focussed, on-track, motivated and productive.
The Linux developers had the luxury of skipping the design and jumping straight into the fun stuff: coding and implementation. But look at Btrfs as a single example. They started coding before the design was final, then iterated on the design several times, requiring costly and bug-prone reworking of the code, then froze the on-disc format before the design was ready for it, and are now stuck in a tarpit of their own making, unable to fix all the design mistakes and trapped in a purgatory of endless bugfixing of the mess they created. Contrast that with ZFS, where the design was formalised and validated by a small team of experts before any coding started, and it has worked without serious data loss since day one. I do believe that the in-person approach has a lot of value.
Torque is cheap.