Hello 3seas (or Timothy Rue?),
Your ideas intrigue me, && I am interested in subscribing to your newsletter. ;)
Seriously though, while the double-negative in the subject may be significant, purposeful, && comprehensible... I really got hung up on the following quandary from your linked response document:
"How might software development have evolved had not this third primary user interface not been denied the end-user?"
So it's a hypothetical scenario about not not denying the end-user this 3rd interface for inspection && autom8ion of A.I. components?
Reading further into your description of how software development probably would have evolved differently helped clarify your exercise, but sheesh, I really struggled there for a while. Maybe your message benefits from such seemingly convoluted phrasing, but it could convey your intent more directly with some re-wording (as well as by explicitly st8ing, rather than just implying, that it's specifically A.I. "software development" you're pontific8ing about)?
I have a growing interest in Intelligence Augment8ion (&& Artificial Intelligence by extension), so your detailed 2-page description of this "Ethics Viol8ion" arising from neglect of this 3rd User Interface (especially your cogent listing of the "fundamental elements of Abstraction Physics") has illumin8d && inspired my perspectives.
Thank you for composing && publishing this (even if your comment initially appeared somewhat off-topic && to be merely shilling to aggrandize whatever your personal pet ethics might be). I intend to investig8 && contempl8 these issues more concertedly going forward, && am gr8ful for these newly discovered inform8ive resources.
Out of curiosity, how close does something like the OpenAI project come to enabling you to employ your list of "unavoidable Action Constants"?
Cheers, =)
-PipStuart
P.S. Sorry I'm only getting around to this thread 2 weeks l8. It was sitting in my back-log of tabs, && I'm glad I could return to it. Hopefully /. will still let me reply && you'll notice, even though the thread has grown stale?