The question isn't whether we should replace filesystems, but whether we should move core file system services *into* the filesystem. That is, should we embed everything that locate does into the filesystem? My answer would be "no" (I prefer single-task entities where possible), but a filesystem "hook" wouldn't be bad (i.e., trigger X when a file is updated, where X might be an indexing operation). Perhaps we should standardize more metadata, where it is stored, and how it is accessed. There's nothing wrong with storing that *somewhere*; whether it is the filesystem or elsewhere is a bit of an implementation detail.
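A hook of that sort can be approximated in user space. Here is a minimal, polling-based sketch (the function name and structure are mine; a real implementation would use kernel facilities like inotify on Linux or FSEvents on macOS rather than polling):

```python
import os

def scan_for_updates(path, mtimes):
    """One pass over `path`: return files whose mtime changed since the
    last scan, updating `mtimes` (a dict of filepath -> mtime) in place.
    A stand-in for a real filesystem hook (inotify, FSEvents, etc.)."""
    updated = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            try:
                mtime = os.stat(fp).st_mtime
            except OSError:
                continue  # file vanished between walk and stat
            if fp in mtimes and mtimes[fp] != mtime:
                updated.append(fp)  # trigger X here, e.g. re-indexing
            mtimes[fp] = mtime
    return updated
```

The caller would feed each updated path to whatever X is (an indexer, say), which keeps the filesystem itself single-purpose while still supporting the hook idea.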
My wife and I teach a homeschool co-op, so we have had to do a lot of searching for low-cost solutions for mixed-mode classes. The same solutions would probably work just as well, for less money, in corporate offices.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmindmatters.ai%2F2020%2F08...
For anyone who wants to see what SPU code looked like, here is an old article of mine from IBM's DeveloperWorks on the subject:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.ibm.com%2Fdeveloperw...
Anywhere you have a second derivative in which the variable you are differentiating with respect to is itself dependent on another variable. Previously, you had to use Faà di Bruno's formula to handle this situation properly. Now you can just do algebraic manipulations.
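To make that concrete, here is a quick numerical sanity check (pure Python; the parametric functions x = sin t, y = e^t are arbitrary choices of mine) showing that the algebraically-manipulated second derivative agrees with a direct finite-difference computation:

```python
import math

t = 0.3
# Parametric example: x = sin(t), y = exp(t); derivatives w.r.t. t.
# The dt factors cancel in the ratios below, so we can use the
# t-derivatives directly in place of the differentials.
dx, d2x = math.cos(t), -math.sin(t)
dy, d2y = math.exp(t), math.exp(t)

# Algebraic second derivative: d2y/dx^2 - (dy/dx) * (d2x/dx^2)
algebraic = d2y/dx**2 - (dy/dx) * (d2x/dx**2)

# Independent check: y as a function of x is exp(asin(x));
# central finite difference approximates its second derivative.
def y_of_x(x):
    return math.exp(math.asin(x))

x0, h = math.sin(t), 1e-5
numeric = (y_of_x(x0 + h) - 2*y_of_x(x0) + y_of_x(x0 - h)) / h**2

assert abs(algebraic - numeric) < 1e-4
```

The same answer falls out of Faà di Bruno's formula, but here it comes from nothing more than treating the differentials as algebraic quantities.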
I recently had another paper which sat for 4 MONTHS in the editor's inbox before he decided he just wasn't interested.
What needs to happen is to have a small change in policy like this:
1) You can submit to multiple journals at once
2) A journal makes an offer to send it for review
3) Accepting an offer under (2) requires that you withdraw your submission from the other journals
Then the procedure goes on as before. This will prevent editors from wasting everyone's time.
What's super-super frustrating is that I had a *different* paper that got rejected because it needed a proof of a result, but the proof was outside the scope of that paper. So I have a different paper that was waiting an extra 4 months because it needs this other paper to be reviewed first.
The only reason I don't just self-publish everything is that peer review helps me convince myself that I'm not crazy.
If you read my paper, I actually suggest this as a shortened form of my own. This notation is Arbogast's, and is woefully underused. I show how to interconvert between Arbogast notation and my own in the paper.
Not quite. d(1) *is* zero. The differential of a constant is zero, basically by definition. If e is an infinitesimal, 0/e is still zero. However, d^2x/dx^2 != d(dx/dx)/dx. d(dx/dx)/dx, using the new notation, is "d^2x/dx^2 - (dx/dx)(d^2x/dx^2)", which is obviously zero by inspection.
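Spelling that out (this is just the quotient rule for differentials applied to dy/dx):

```latex
d\!\left(\frac{dy}{dx}\right)
  = \frac{dx\, d(dy) - dy\, d(dx)}{(dx)^2}
  = \frac{d^2y}{dx} - \frac{dy\, d^2x}{(dx)^2},
\qquad\text{so}\qquad
\frac{d\!\left(\frac{dy}{dx}\right)}{dx}
  = \frac{d^2y}{dx^2} - \frac{dy}{dx}\cdot\frac{d^2x}{dx^2}.
```

Substituting y = x gives dy/dx = 1 and d^2y = d^2x, so the two terms cancel, which is why the expression is zero by inspection.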
The problem with math books as e-books is making them look right on a small screen. If you just want a PDF, send me an email and I'll send you one, especially if you consider telling other people how great it is. Unfortunately, you can't just tell Amazon to take your PDF and make it an e-book.
I've actually got a second paper on partial derivatives just about ready to go. It was originally part of this paper, but it got a little long, and I wanted to rethink and clarify a few concepts. Anyway, partial differentials have the same notational problem *plus* one more. The problem is that there are several partial differentials which all go by the same name. Once you name them properly (i.e., give them each a distinct name) the problems go away.
My coauthor has been doing this to good effect. His book "Controllability of Dynamic Systems: The Green's Function Approach" utilizes it. My role in mathematics is primarily teaching high schoolers, so I don't spend a lot of time with differential equations. That's also the reason I *have* a co-author: I needed someone to tell me I wasn't crazy.
Except that, in the first derivative, it *is* used as a fraction. Otherwise you couldn't reformulate your equation for integration (i.e., you have to multiply both sides by dx, which is treating it as a fraction). So, saying that it is a fraction in one case, but not in the next, even though it is still written as a fraction and *could* be treated as one, seems strange, at least to me.
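For instance, the usual separation-of-variables manipulation (a standard textbook example, not specific to my paper) depends on exactly this fraction treatment:

```latex
\frac{dy}{dx} = f(x)
\;\Longrightarrow\;
dy = f(x)\,dx
\;\Longrightarrow\;
\int dy = \int f(x)\,dx
\;\Longrightarrow\;
y = \int f(x)\,dx + C.
```

Every step after the first only makes sense if dy/dx can be taken apart as a ratio of differentials.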
You never did a second derivative test to determine whether you are at a local minimum or maximum?
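For anyone who doesn't remember it, the test goes: at a critical point a with f'(a) = 0,

```latex
f''(a) > 0 \;\Rightarrow\; f \text{ has a local minimum at } a,
\qquad
f''(a) < 0 \;\Rightarrow\; f \text{ has a local maximum at } a.
```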
Most intro calculus books at least show the notation for the second derivative. However, it is true that they rarely take it far enough to hit any problems with the notation.
I actually figured this out while trying to find a good way to explain the notation to my students in a homeschool co-op class (I have a range of 9th-12th graders; the 9th grader is an exception, but she is ridiculously smart). I read through numerous calculus textbooks trying to find the justification for the notation, and none of them even attempted it. So I decided to work it out myself, and found that the standard notation was wrong.
This is my thought as well. Interestingly, I developed this while writing a book (Calculus from the Ground Up) to use for my homeschool co-op calculus classes. I was trying to find a good way to explain the notation, and I literally had 20 calculus books that I read through trying to find a good explanation for the standard notation in any of them. None of them even attempted an explanation, just "this is the way it is, but don't treat it as a fraction." So, I tried to deduce the notation myself. That's when I realized that it was not just limited, it was actually wrong. So I wrote the paper and finished the book (it's Appendix B in the book).
It's a bit of both. Some of the facts of the matter were known, but it was assumed that this was just "the way it was". That is, no one considered it an open problem. For instance, we view the inability to divide by zero just a fact of mathematics, not a flaw. Likewise, this was not known to be a flaw, it was just assumed that this was the way things worked.
If you need to point to a definitive flaw, it was in our understanding of how it was supposed to work - the relationship between our understanding and the notation. Once *that* flaw was discovered, the actual notation just spilled right out. That is, the flaw was that people were *not* treating dy/dx *sufficiently* as a fraction, due to 19th century preferences against infinitesimals. Once you realize that dy/dx really is a fraction, and has to be treated accordingly, everything automatically works.
It's almost humorous because there was no real advanced work to do. Literally everything needed is available in intro calculus. The problem was (a) the mathematics community had a habit of *not* treating dy/dx as a fraction, and (b) new students who didn't know better were simply taught *what* to do, not *why* to do it, and continued to repeat the mistake for over a century.
Happiness is a hard disk.