Comment Re:They could do that? (Score 1) 104
I am unsure if you are aware, but we have actually tried the no- or less-government thing. Some places are even still running that experiment! How do you like their standard of life?
Another video echoing the point on the risks of AI combined with "bad" capitalism: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"(8:54) So just to summarize: We're currently releasing the most powerful, inscrutable, uncontrollable technology that humanity has ever invented, one that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under the maximum incentive to cut corners on safety. And we're doing this because we think it will lead to utopia? Now there's a word for what we're doing right now -- which is: this is insane. This situation is insane.
Now, notice what you're feeling right now. Do you feel comfortable with this outcome? But do you think that if you're someone who's in China or in France or the Middle East, or you're part of building AI, and you're exposed to the same set of facts about the recklessness of this current race, do you think you would feel differently? There's a universal human experience to the thing that's being threatened by the way we're currently rolling out this profound technology into society. So, if this is crazy, why are we doing it? Because people believe it's inevitable. [Same argument for any arms race.] But just think for a second. Is the current way that we're rolling out AI actually inevitable? Like, if literally no one on Earth wanted this to happen, would the laws of physics force AI out into society? There's a critical difference between believing it's inevitable, which creates a self-fulfilling prophecy and leads people to being fatalistic and surrendering to this bad outcome -- versus believing it's really difficult to imagine how we would do something really different. But "it's difficult" opens up a whole new space of options and choice and possibility than simply believing "it's inevitable", which is a thought-terminating cliche. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can't do something else if we believe it's inevitable.
Okay, so what would it take to choose another path? Well, I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path -- but under different incentives that offer more discernment, foresight, and where power is matched with responsibility. So, imagine if the whole world had this shared understanding about the insanity, how differently we might approach this problem..."
He also makes the point that we ignored the downsides of social media and so got the current problematical situations related to it -- and so do we really want to do the same with way-more-risky AI? He calls for "global clarity" on AI issues. He provides examples from nuclear, biotech, and ozone on how collective understanding and then collective action made a difference to manage risks.
Tristan Harris is associated with "The Center For Humane Technology" (whose mailing list I joined a while back):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.humanetech.com%2F
"Articulating challenges.
Identifying interventions.
Empowering humanity."
Just saw this yesterday: former President Obama talking about how concerns about AI are not overhyped (mostly regarding economic disruption), and also how cooperation between people is the biggest issue:
"ORIGINAL FULL CONVERSATION: An Evening with President Barack Obama"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"(31:43) The changes I just described are accelerating. If you ask me right now the thing that is not talked about enough but is coming to your neighborhood faster than you think, this AI revolution is not made up; it's not overhyped.
I would respectfully build on what President Obama said by adding that a major reason it is hard to get people to cooperate about such technology is that we need to shift our perspective, as suggested by my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
I said much the same in the open letter to Michelle Obama from 2011:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fopen-le...
One thing I would add to such a letter now is a mention of Dialogue Mapping using IBIS (perhaps even AI-assisted) to help people cooperate on solving "wicked" problems through visualizing the questions, options, and supporting pros and cons in their conversations:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcognitive-science.info...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fmedia%2Fl...
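To make the IBIS idea concrete, here is a minimal sketch of a Dialogue Map as a tree of questions, options, and pro/con arguments. This is just an illustrative data structure (the class and method names are my own invention, not from any of the tools linked above):

```python
from dataclasses import dataclass, field

# IBIS (Issue-Based Information System) models a conversation as a tree:
# questions (issues) have candidate answers (options), and each option can
# have supporting (pro) or opposing (con) arguments attached to it.

@dataclass
class Node:
    kind: str  # one of "question", "option", "pro", "con"
    text: str
    children: list = field(default_factory=list)

    def add(self, kind: str, text: str) -> "Node":
        """Attach a child node and return it, so maps can be built fluently."""
        child = Node(kind, text)
        self.children.append(child)
        return child

    def outline(self, depth: int = 0) -> str:
        """Render the map as an indented text outline."""
        marker = {"question": "?", "option": "*", "pro": "+", "con": "-"}[self.kind]
        lines = ["  " * depth + f"{marker} {self.text}"]
        for c in self.children:
            lines.append(c.outline(depth + 1))
        return "\n".join(lines)

# Example: mapping a fragment of the AI conversation discussed above.
root = Node("question", "How should society roll out powerful AI?")
pause = root.add("option", "Slow down and coordinate globally")
pause.add("pro", "More time for safety work")
pause.add("con", "Hard to enforce across an arms race")
race = root.add("option", "Race ahead under market incentives")
race.add("con", "Maximum incentive to cut corners on safety")
print(root.outline())
```

Even this toy version shows the appeal of the approach: pros and cons are visibly attached to specific options rather than lost in a linear argument thread.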
Here is one example of people working in that general area to support human collaboration on "wicked problems" (there are others, but these are the related people I am conversing with at the moment): "The Sensemaking Scenius", one way to help achieve the "global clarity" that Tristan Harris and, indirectly, President Obama call for:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.scenius.space%2F
"The internet gods blessed us with an abundance of information & connectivity -- and in the process, boiled our brains. We're lost in a swirl of irrelevancy, trading our attention, at too low a price. Technology has destroyed our collective sensemaking. It's time to rebuild our sanity. But how?
Introducing The Sensemaking Scenius, a community of practice for digital builders, researchers, artists & activists who share a vision of a regenerative intentional & meaningful internet."
Something related to that by me from 2011:
http://barcamp.org/w/page/4722...
"This workshop was led by Paul Fernhout on the theme of tools for collective sensemaking and civic engagement."
I can hope for a convergence of these AI concerns, these sorts of collaborative tools, and civic engagement.
Bucky Fuller talked about being a "trim tab", a smaller rudder on a big rudder for a ship, where the trim tab slowly turns the bigger rudder which ultimately turns the ship. Perhaps civic groups can also be "trim tabs", as in: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. (Margaret Mead)"
To circle back to the original article on what Facebook is doing: frankly, if there are some people at Facebook who really care about the future of humanity more than the next quarter's profits, this is the kind of work they could be doing related to "Artificial Super Intelligence". They could add Dialogue Mapping tools to Facebook's platform (using IBIS or similar, perhaps supported by AI) to help people understand the risks and opportunities of AI and to support related social collaboration towards workable solutions -- rather than just rushing ahead to create ASI for some perceived short-term economic advantage. And this sort of collaboration-enhancing work is the kind of thing Facebook should be paying $100 million signing bonuses for, if such bonuses make any sense at all.
I quoted President Carter in that open letter, and the sentiment is as relevant about AI as it was then about energy:
http://www.pbs.org/wgbh/americ...
"We are at a turning point in our history. There are two paths to choose. One is a path I've warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure. All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy [or AI] problem."
> the issue right now is that terrorist groups, Hamas, and Iranian groups, are and did attack Israel, that's not up for debate
This statement does a pretty good job of reframing the conflict while ignoring ~80 years of history. It seems many people forget the wars of the 1950s. Check who launched the actual first attack. Not just posturing and massing troops, but an actual attack. Hint: It wasn't the Arabs.
Some number already had a deport order issued by a judge
Some, yes.
that *was* their due process.
Correct.
In some other cases it could be expedited removal which is generally within 2 years of entry by sea or 14 days/100 miles of the land border.
Sure could.
But it wasn't, and I think you know that.
These guys were picking people up after court hearings without a deport order.
They were going for the low-hanging fruit.
The law he was trying to leverage doesn't require due process.
That's not how it works.
Due process isn't something granted by law, it is a constitutional right.
There are cases where that right may not apply, however. Trump wagered that calling these people "enemies" or "invaders" would put them into that category. He was wrong.
You are insinuating that most people being deported had no due process, and that is factually incorrect.
Actually, for a while, nearly 100% of them were deported without due process.
A Trump appointed judge put a stop to that bullshit.
The situation is now better, for sure, but let's not forget he literally raced to get people on planes out of the country before they could see a Judge.
60% of Americans are not confident that they will get sufficient health care for the rest of their lives.
Citation needed.
But that's a pretty understandable feeling, even if true.
Even being well off at this point in my life, I think my biggest concern about losing my job would be my healthcare. So I'm not really sure what the point you're trying to make there is.
That's not 'working well'.
Says you?
America is way down on the list of life expectancy for global nations. That's not 'working well' either.
I don't think healthcare is going to stop us from eating shit food, lol.
That's not 'working well' either.
It is for the fucksticks happily sacrificing years of their life to eat a 16 ounce steak every day. Who are you to judge what's "working well" for them?
And I still say your grandfather would have been able to produce more if not sick. Working hours is not the same as producing.
You can say it until you're blue in the face. You have no evidence to support it though, and there is plenty of evidence against it.
You wrote: "As useful as capitalism has proved to be, its motivations are primitive and short sighted. How AI is being punted is another example of "bad" capitalism. Bad capitalism has helped wreck the planet more than anything else."
Geoffrey Hinton, as a self-professed socialist, makes a version of your point in the interview previously linked to.
And your point is ultimately the key insight emerging from our discussion, as I reflect on it. AGI or especially ASI may indeed take over someday to humanity's detriment, but that, if it happens, is likely in the future. The biggest threat right now to most humans is other humans developing and using AGI or ASI within a capitalist framework.
I wrote to Ray Kurzweil about something similar back in 2007, responding to a point in one of his books where he suggested that the best way to get AI quickly was for competitive US corporations to create it. I suggested, essentially, that AI produced through competition is more likely to have a bad outcome for humanity than AI produced through cooperation. The points there apply to several current AI entrepreneurs as well. Someone I sent it to put it up here, and I will include a key excerpt below:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fheybryan.org%2Ffernhout%2F...
That said, other systems like, say, in the USSR have their own legacies of, say, environmental destruction and suffering (as with Chernobyl). So Capitalism has not cornered the market on poor risk management -- even though the ideal of any capitalist enterprise is to privatize gains while socializing risks and costs.
Here is one book of many I've collected on improving organizations (maybe of tangential relevance if you are thinking about organization improvement for your project):
"Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness" by Frédéric Laloux
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgithub.com%2Fpdfernhout%2F...
"Reinventing Organizations is a radical book, in the sense that radical means getting to the root of a problem. Drawing on works by other writers about organizations and human development, Frédéric Laloux paints a historical picture of moving through different stages of organizational development which he labels with colors. These stages are:
* Red (impulsive, gang-like, alpha male)
* Amber (conformist, pyramidal command and control)
* Orange (achievement, mechanistic, scarcity-assuming, cross-functional communications across a pyramid)
* Green (pluralistic, inverted pyramid with servant leadership and empowered front line)
* Teal (evolutionary, organic, abundance-assuming, self-actualized, self-organizing, purpose-driven)."
Maybe we as a society need to become Teal overall -- or at least Green -- if we are to prosper with AI?
Good talking to you too, same.
--------- From book-review-style email to Ray Kurzweil in 2007
To grossly simplify a complex subject, the elite political and economic culture Kurzweil finds himself in as a success in the USA now centers around maintaining an empire through military preparedness and preventive first strikes, coupled with a strong police state to protect accumulated wealth of the financially obese. This culture supports market driven approaches to supporting the innovations needed to support this militarily-driven police-state-trending economy, where entrepreneurs are kept on very short leashes, where consumers are dumbed down via compulsory schooling, and where dissent is easily managed by removing profitable employment opportunities from dissenters, leading to self-censorship. Kurzweil is a person now embedded in the culture of the upper crust economically of the USA's military and economic leadership. So, one might expect Kurzweil to write from that perspective, and he does. His solutions to problems the singularity pose reflect all these trends -- from promoting first strike use of nanobots, to design and implementation facilitated through greed, to widespread one-way surveillance of the populace by a controlling elite.
But the biggest problem with the book _The Singularity Is Near: When Humans Transcend Biology_ is that Kurzweil seems unaware he is doing so. He takes all those things as given, like a fish ignoring water, ignoring the substance of authors like Zinn, Chomsky, Domhoff, Gatto, Holt, and so on. And that shows a lack of self-reflection on the part of the book's author. And it is a lack of self-reflection which seems dangerously reckless for a person of Kurzweil's power (financial, political, intellectual, and persuasive). Of course, the same thing could be said of many other leaders in the USA, so he is not alone there. But one expects more from someone like Ray Kurzweil for some reason, given his incredible intelligence. With great power comes great responsibility, and one of those responsibilities is to be reasonably self-aware of one's own history and biases and limitations. He has not yet joined the small but growing camp of the elite who realize that accompanying the phase change the information age is bringing on must be a phase change in power relationships, if anyone is to survive and prosper. And ultimately, that means not a move to new ways of being human, but instead a return to old ways of being human, as I shall illustrate below drawing on work by Marshall Sahlins.
One of the biggest problems as a result is Kurzweil's view of human history as incremental and continual "progress". He ignores how our society has gone through several phase changes in response to continuing human evolution and increasing population densities: the development of fire and language and tool-building, the rise of militaristic agricultural bureaucracies, the rise of industrial empires, and now the rise of the information age. Each has posed unique difficulties, and the immediate result of the rise of militaristic agricultural bureaucracies or industrialism was most definitely a regression in standard of living for many humans at the time. For example, studies of human skeleton size, which reflects nutrition and health, show that early agriculturists were shorter than the preceding hunter-gatherers and showed more evidence of disease and malnutrition. This is a historical experience glossed over by Kurzweil's broad exponential trend charts related to longevity, which jump from Cro-Magnon to the industrial era. Yes, the early industrial times of Dickens in the 1800s were awful, but that does not mean the preceding times were even worse -- they might well have been better in many ways. This is a serious logical error in Kurzweil's premises, leading to logical problems in his subsequent analysis. It is not surprising he makes this mistake, as the elite in the USA he is part of finds this fact convenient to ignore, since it would threaten the whole set of justifications related to "progress" woven around itself to justify a certain unequal distribution of wealth. It is part of the convenient ignorance of the implications that, say, the Enclosure acts in England drove the people from the land and farms that sustained them, forcing them into deadly factory work against their will -- an example of industrialization creating the very poverty Kurzweil claims it will alleviate.
As Marshall Sahlins shows, for most of history, humans lived in a gift economy based on abundance. And within that economy, for most food and goods, families and tribes were mainly self-reliant, drawing from an abundant nature they had mostly tamed. Naturally there were many tribes with different practices, so it is hard to generalize completely on this topic -- but certainly some did show these basic common traits of that lifestyle. Only in the last few thousand years did agriculture and bureaucracy (e.g. centered in Ancient Egypt, China, and Rome) come to dominate human affairs -- and even then it was a dominance from afar and a regulation of only a small part of life and time. It is only in the last few hundred years that the paradigm has shifted to specialization and an economy based on scarcity. Even most farms 200 years ago (which was where 95% of the population lived then) were self-reliant for most of their items judged by mass or calories. But clearly humans have been adapted, for most of their recent evolution, to a life of abundance and gift giving.
When you combine these factors, one can see that Kurzweil is right about most recent historical trends, with this glaring exception, but then offers an incomplete and misleading analysis of current events and future trends, because his historical analysis is incomplete and biased.
So, this would suggest more caution approaching a singularity. And it would suggest the ultimate folly of maintain[ing] R&D systems motivated by short-term greed to develop the technology leading up to it. But it is exactly such a policy of creating copyright and patents via greed (the so-called "free market", where paradoxically nothing is free) that Kurzweil exhorts us to expand. And it is likely here where his own success most betrays him -- where the tragedy of the approach to the singularity he promotes will result from his being blinded by his very great previous economic success. If anything, the research leading up to the singularity should be done out of love and joy and humor and compassion -- with as little greed about it as possible, IMHO. But somehow Kurzweil suggests the same processes that brought us the Enron collapse and war profiteering through the destruction of the cradle of civilization in Iraq are the ones to bring humanity safely through the singularity. One pundit, I forget who, suggested the problem with US cinema and TV was that not enough tragedies were produced for them -- not enough cautionary tales to help us avert such tragic disasters arising from our own limitations and pride.
Kurzweil's rebuttals to critics in the last part of the book primarily focus on those who do not believe AI can work, or those who doubt the singularity, or the potential of nanotechnology or other technologies. One may well disagree with Kurzweil on the specific details of the development of those trends, but many people besides him, including before him, have talked about the singularity and said similar things. Of the fact of an approaching singularity there is likely little doubt, it seems, even as one can quibble about dates and such. But the very nature of a singularity is that you can't peer inside it, although Kurzweil attempts to do so anyway, without enough caveats or self-reflection. So, what Ray Kurzweil sees in the mirror of a reflective singularity is ultimately a reflection of -- Ray Kurzweil and his current political beliefs.
The important thing to remember is that Kurzweil's book is a quasi-Libertarian/Conservative view of the singularity. He mostly ignores the human aspects of joy, generosity, compassion, dancing, caring, and so on to focus on a narrow view of logical intelligence. His antidote to fear is not joy or humor -- it is more fear. He has no qualms about enslaving robots or AIs in the short term. He has no qualms about accelerating an arms race into cyberspace. He seems to have a significant fear of death (focusing a lot on immortality). The real criticisms Kurzweil needs to address are not the straw men which he attacks (many of which are produced by people with the same capitalist/militarist assumptions he has). It is the criticisms that come from those thinking about economies not revolving around scarcity, or those who reflect on the deeper aspects of human nature beyond greed and fear and logic, which Kurzweil needs to address. Perhaps he even needs to address them as part of his own continued growth as an individual. To do so, he needs to intellectually, politically, and emotionally move beyond the roots that produced the very economic and political success which let his book become so popular. That is the hardest thing for any commercially successful artist or innovator to do. It is often a painful process full of risk.
I do not intend to vilify Kurzweil here. I think he means well. And he is courageous to talk [a]bout the singularity and think about ways to approach it that support the public good. His early work on music equipment and tools for the blind is laudable. So was his early involvement with Unitarians and social justice. But somewhere along the line perhaps his perspective became shackled by his own economic success. To paraphrase a famous quote, perhaps it is "easier for a camel to go through the eye of a needle than a rich man to comprehend the singularity."
That's just a victim of America. Move to a country that pays for healthcare.
I'm not poor like my father was. His problems are not mine. America is a quantifiably better place with regard to someone born poor ending up in my current income bracket.
Instead, I work to make sure other people don't have to go through what he did.
No need to abandon what works better for 80% of the population than anywhere else in the world, just because it doesn't work for 20% of it. You fix what's wrong for the 20%.
Sure, we're backsliding right now. But I'm quite certain that will not be long-lived.
Friction is a drag.