Comment Re: effective? (Score 4, Insightful) 82

The COVID mRNA vaccines were the culmination of decades of research into genetic vaccines that could, in essence, be engineered to target a selected antigen without the years of trial and error required by the methods we have been using since the 1950s. Within days of the virus genome being published, they had a vaccine design; the months it took to get it to the public were taken up with studies of the safety and effectiveness of the heretofore untested technology, ramping up production, and preparing for the distribution of a medicine that required cryogenic storage.

It would be unreasonable not to give the Trump administration credit for not mucking up this process. But the unprecedented speed of development wasn't due to Trump employing some kind of magical Führer mojo. It was a stroke of good fortune that when the global pandemic epidemiologists have been worried about arrived, mRNA technology was just at the point where you could use it. Had it arrived a decade earlier, the consequences would have been far worse, no matter who was president.

The lesson isn't that Trump is some kind of divine figure who willed a vaccine into existence, it's that basic research that is decades from practical application is important.

Comment Re: Talking about the weather (Score 1) 149

Sure, it's quite possible for two people to exchange offhand remarks about the local weather apropos of nothing, with no broader point in mind. It happens all the time, even, I suppose, right in the middle of a discussion of the impact of climate change on the very parameters they were discussing.

Comment Re:I live (Score 4, Interesting) 149

The thing to understand is that we're talking about six tenths of a degree of warming since 1990, when averaged over *the entire globe* for the *entire year*. If the change were actually distributed that way -- evenly everywhere over the whole year -- nobody would notice any change whatsoever; there would be no natural system disruption. The temperature rise would be nearly impossible to detect against the natural background variation.

That's the thinking of people who point out that the weather outside their doors is unusually cool despite global warming. And if that was what climate change models actually predicted, they'd be right. But that's not what the models predict. They predict a patchwork of some places experiencing unusual heat while others experience unusual coolness, a patchwork that is constantly shifting over time. Only when you do the massive statistical work of averaging *everywhere, all the time* out over the course of the year does it manifest unambiguously as "warming".

In the short term -- over the course of the coming decade, for example -- it's less misleading to think of the troposphere becoming more *energetic*. When you consider a six tenths of a degree increase across the roughly 10^18 kg of the troposphere, that is a vast, almost unthinkable amount of added energy. Note that this is also accompanied by a *cooling* of the stratosphere. Together these produce a series of extreme weather events, both extreme heat *and* extreme cold, that aggregate into an average increase that's meaningless as a predictor of what any location experiences at any point in time.
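
To put a rough number on "vast", here is a back-of-the-envelope sketch. The 0.6-degree figure and the ~10^18 kg troposphere mass are from above; the specific heat of air and the TNT comparison are standard textbook values I am adding:

```python
# Back-of-the-envelope: energy required to warm the troposphere by 0.6 K.
# Mass figure is the round number from the post above; c_p is the standard
# specific heat of dry air at constant pressure.
mass_kg = 1e18        # troposphere mass, rough (often quoted nearer 4e18 kg)
c_p = 1005.0          # J/(kg*K)
delta_t = 0.6         # K of warming

energy_j = mass_kg * c_p * delta_t
print(f"{energy_j:.2e} J")                         # ~6.0e+20 J
print(f"~{energy_j / 4.18e15:,.0f} Mt TNT equiv")  # one megaton ~ 4.18e15 J
```

That's on the order of a hundred thousand megatons of TNT worth of extra heat sloshing around the lower atmosphere, which is why "more energetic" is the better mental model than "slightly warmer".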

Comment OpenAI CEO Altman says generations vary in use (Score 2) 248

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftech.yahoo.com%2Fai%2Farti...
""Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," Altman said at Sequoia Capital's AI Ascent event earlier this month."

Not a surprise then that a lot of Slashdotters (who tend to be on the older side) emphasize search engine use.

Insightful video on other options for using AI:
"Most of Us Are Using AI Backwards -- Here's Why"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Takeaways
  1. Compression Trap: We default to using AI to shrink information--summaries, bullet points, stakeholder briefs--missing opportunities for deeper insight.
  2. Optimize Brain Time: The real question isn't "How fast can I read?" but "When should I slow down and let ideas ferment?" AI can be tuned to extend, not shorten, our cognitive dwell-time on critical topics.
  3. Conversational Partnership: Advanced voice mode's give-and-take cadence keeps ideas flowing, acting like a patient therapist and sharp colleague rolled into one.
  4. Multi-Model Workflow: I pair models deliberately--4o voice for live riffing, O3 for distilling a thesis, Opus 4 for conceptual sculpting--to match each cognitive phase.
  5. Naming the Work: Speaking thoughts aloud while an AI listens helps "name" the terrain of a project, turning vague hunches into navigable coordinates.
  6. AI as Expander: Used thoughtfully, AI doesn't replace brainpower; it amplifies it, transforming routine tooling into a force-multiplier for deep thinking."

Other interesting AI Videos:

"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

"Is AI Apocalypse Inevitable? - Tristan Harris"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

See also an essay by Maggie Appleton: "The Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmaggieappleton.com%2Fai-...

Talk & video version: "The Expanding Dark Forest and Generative AI: An exploration of the problems and possible futures of flooding the web with generative AI content"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmaggieappleton.com%2Ffor...

On what Star Trek in the 1960s had to say about AI and becoming "Captain Dunsel", and also the risk of AI reflecting its obsessive & flawed creators:
"The Ultimate Computer // Star Trek: The Original Series Reaction // Season 2"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

An insightful Substack post (which I replied to) on that theme of flawed creators making a flawed creation, mentioning the story of the Krell from Forbidden Planet:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fsubstack.com%2F%40bernsh%2Fn...
"In Forbidden Planet the Krell built a machine of unimaginable power, designed to materialize thought itself -- but were ultimately destroyed because it also materialized their unconscious, primitive, destructive impulses, which they themselves did not fully understand or control. ..."

They also mention other stories there (perhaps generated from an LLM), including the Garden of Eden, Pandora's Box, the Tower of Babel, the Icarus myth, and Prometheus. In my response I mentioned some other sci-fi stories that touch on related themes, as well as my sig on the irony of tools of abundance misused by scarcity-minded people.

Inspired by that first video on using AI to help refine ideas, a few days ago I used llama3.1 to discuss an essay I wrote related to my sig ( "Recognizing irony is key to transcending militarism" https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni... ). The most surprisingly useful part was when I asked the LLM to list authors who had written related things (most of whom I knew of), and then, as a follow-up, what those authors might have thought about the essay I wrote. The LLM included for each author what parts of the essay they would have praised and also what was missing from the essay from that author's perspective.
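
For anyone who wants to repeat the exercise, here is roughly the shape of it as a minimal script against a local Ollama server running llama3.1 (the endpoint and field names follow Ollama's /api/generate interface as I understand it, and the essay filename is hypothetical -- verify against your own setup):

```python
# Minimal sketch: using a local llama3.1 (via Ollama's REST API) to list
# authors related to an essay, then ask how each would react to it.
# Endpoint and field names assumed from Ollama's /api/generate docs.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def ask(prompt: str) -> str:
    data = json.dumps({"model": "llama3.1", "prompt": prompt,
                       "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

essay = open("essay.txt").read()  # hypothetical local copy of the essay

authors = ask("List authors who have written things related to this essay:\n\n"
              + essay)
print(authors)

# The surprisingly useful follow-up: per-author praise and per-author gaps.
print(ask("For each author below, what parts of this essay would they have "
          "praised, and what would they think is missing?\n\nAuthors:\n"
          + authors + "\n\nEssay:\n" + essay))
```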

Comment The Big Crunch by David Goodstein (1994) (Score 3, Interesting) 78

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fweb.archive.org%2Fweb%2F20...
"The period 1950-1970 was a true golden age for American science. Young Ph.D's could choose among excellent jobs, and anyone with a decent scientific idea could be sure of getting funds to pursue it. The impressive successes of scientific projects during the Second World War had paved the way for the federal government to assume responsibility for the support of basic research. Moreover, much of the rest of the world was still crippled by the after-effects of the war. At the same time, the G.I. Bill of Rights sent a whole generation back to college transforming the United States from a nation of elite higher education to a nation of mass higher education. ...
        By now, in the 1990's, the situation has changed dramatically. With the Cold War over, National Security is rapidly losing its appeal as a means of generating support for scientific research. There are those who argue that research is essential for our economic future, but the managers of the economy know better. The great corporations have decided that central research laboratories were not such a good idea after all. Many of the national laboratories have lost their missions and have not found new ones. The economy has gradually transformed from manufacturing to service, and service industries like banking and insurance don't support much scientific research. To make matters worse, the country is almost 5 trillion dollars in debt, and scientific research is among the few items of discretionary spending left in the national budget. There is much wringing of hands about impending shortages of trained scientific talent to ensure the Nation's future competitiveness, especially since by now other countries have been restored to economic and scientific vigor, but in fact, jobs are scarce for recent graduates. Finally, it should be clear by now that with more than half the kids in America already going to college, academic expansion is finished forever.
        Actually, during the period since 1970, the expansion of American science has not stopped altogether. Federal funding of scientific research, in inflation-corrected dollars, doubled during that period, and by no coincidence at all, the number of academic researchers has also doubled. Such a controlled rate of growth (controlled only by the available funding, to be sure) is not, however, consistent with the lifestyle that academic researchers have evolved. The average American professor in a research university turns out about 15 Ph.D students in the course of a career. In a stable, steady-state world of science, only one of those 15 can go on to become another professor in a research university. In a steady-state world, it is mathematically obvious that the professor's only reproductive role is to produce one professor for the next generation. But the American Ph.D is basically training to become a research professor. It didn't take long for American students to catch on to what was happening. The number of the best American students who decided to go to graduate school started to decline around 1970, and it has been declining ever since. ...
        Let me finish by summarizing what I've been trying to tell you. We stand at an historic juncture in the history of science. The long era of exponential expansion ended decades ago, but we have not yet reconciled ourselves to that fact. The present social structure of science, by which I mean institutions, education, funding, publications and so on all evolved during the period of exponential expansion, before The Big Crunch. They are not suited to the unknown future we face. Today's scientific leaders, in the universities, government, industry and the scientific societies are mostly people who came of age during the golden era, 1950 - 1970. I am myself part of that generation. We think those were normal times and expect them to return. But we are wrong. Nothing like it will ever happen again. It is by no means certain that science will even survive, much less flourish, in the difficult times we face. Before it can survive, those of us who have gained so much from the era of scientific elites and scientific illiterates must learn to face reality, and admit that those days are gone forever. I think we have our work cut out for us."
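
Goodstein's 15-to-1 arithmetic is worth running for yourself. If each research professor's students yield more than one new research professor per career, the professoriate grows exponentially; a steady state permits exactly one. A toy sketch (the starting count and generation numbers are just illustrative):

```python
# Toy version of Goodstein's argument: professors "reproduce" via PhD students.
# A steady state allows exactly 1 successor per career; anything more explodes.
def professors(generations: int, successors_per_career: float,
               start: int = 1000) -> float:
    return start * successors_per_career ** generations

for rate in (1.0, 2.0, 15.0):  # 15 = every PhD becomes a research professor
    counts = [round(professors(g, rate)) for g in range(5)]
    print(f"successors/career = {rate:>4}: {counts}")
# 1.0  -> [1000, 1000, 1000, 1000, 1000]       steady state
# 2.0  -> [1000, 2000, 4000, 8000, 16000]      doubling per ~30-year generation
# 15.0 -> [1000, 15000, ..., 50625000]         obviously unfundable
```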

Comment Re: Biodiesel [Re:Synthetic fuels] (Score 1) 363

Sure, but the advantage of crops is that you can easily scale your solar collectors by planting more acres. There are soybean farms with a half million acres out there that would produce significant amounts of biodiesel if used for that purpose. Now, algae is a lot more efficient in a physics sense, but an equivalent algae facility would be on the order of 100,000 acres. The water requirements and environmental impacts of open algae pools would be almost unimaginable. Solar-powered bioreactors would increase yields and minimize environmental costs, at enormous financial cost, although possibly this would be offset by economies of scale.
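
To make the scaling explicit (the per-acre yields here are my own illustrative assumptions, chosen only to be consistent with the ~5x acreage ratio above; real figures vary widely):

```python
# Rough scaling: how many algae acres replace a half-million-acre soybean
# operation? Yields are illustrative assumptions, not measured values.
SOY_GAL_PER_ACRE   = 60.0    # assumed biodiesel yield from soybean oil
ALGAE_GAL_PER_ACRE = 300.0   # assumed ~5x soy, per the acreage ratio above

soy_acres = 500_000
annual_gal = soy_acres * SOY_GAL_PER_ACRE        # 30,000,000 gal/yr
algae_acres = annual_gal / ALGAE_GAL_PER_ACRE    # 100,000 acres
print(f"{annual_gal:,.0f} gal/yr would need ~{algae_acres:,.0f} algae acres")
```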

Either way, a facility that produces economically significant amounts of algae biodiesel would be an engineering megaproject with higher capital and operating costs than crop-based biodiesel. An algae-based energy economy is a cool idea for sci-fi worldbuilding, but in reality, where only the most immediately profitable technologies survive, I wouldn't count on it being more than a niche application.

Comment Re:Fun in Austin (Score 2) 110

It isn't just fanboys. Tesla stock is astronomically overpriced based on the sales performance and outlook of what normal people consider its core business -- electric cars (and government credits). For investors, Tesla is *all* about the stuff that doesn't exist yet, like robotaxis.

Are they wrong to value Musk's promises for Tesla Motors so much? I think so, but it's a matter of opinion. If Tesla actually managed the advances in autonomous vehicle technology needed to make a real robotaxi service viable, I'd applaud that. But I suspect that if Musk succeeds in creating a successful robotaxi business, Tesla will move on to focus on something else. Tesla, for investors, isn't about what it is doing now; it's about not missing out on the next big thing.

Comment Re:Biodiesel [Re:Synthetic fuels] (Score 1) 363

The real problem with biodiesel would be its impact on agriculture and food prices. Ethanol for fuel has driven global corn prices up, which is good for farmers but bad in places like Mexico where corn is a staple crop. Leaving aside the wildcat homebrewer types who collect restaurant waste to make biodiesel, the most suitable virgin feedstocks for biodiesel on an industrial scale are all food crops.

As for its technical shortcomings, if it makes any economic sense at all, then that's a problem for the chemists and chemical engineers. I suspect biodiesel, for all its potential environmental benefits, wouldn't attract serious investment without some kind of mandate, which would be a really bad thing if you're making it from food crops like oil seeds or soybeans.

Comment Re:How is a 10% reduction in traffic a success? (Score 2) 111

I wonder at what rate they'll need to increase the pricing in order to maintain it. Ironically, improved traffic may make driving more desirable.

They will have to increase the price eventually as demand for transport overall rises. The point of the pricing is to deter driving enough that the street network operates within its capacity limits; if driving becomes more desirable than under the status quo ante, they aren't charging enough and will have to raise prices to keep demand manageable.

Think of it this way: either way, traffic will reach some equilibrium. The question is, what is the limiting factor? If using the road is free, then the limiting factor is traffic congestion. If you widen some congested streets, the limiting factor is *still* congestion, so eventually a new equilibrium is found which features traffic jams with even more cars.
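
A toy model makes the mechanism concrete: treat daily trips as price-sensitive and adjust the toll until demand fits within capacity. All of these numbers are illustrative, not a real traffic model:

```python
# Toy feedback loop for congestion pricing: raise the toll while demand
# exceeds the street network's capacity, lower it if there's slack.
def demand(toll: float, base_trips: float = 700_000,
           sensitivity: float = 20_000) -> float:
    """Trips/day at a given toll; linear demand curve for illustration."""
    return max(0.0, base_trips - sensitivity * toll)

capacity = 600_000  # trips/day the network handles without gridlock
toll = 0.0
for _ in range(100):
    gap = demand(toll) - capacity
    if abs(gap) < 1_000:
        break
    toll += 0.25 if gap > 0 else -0.25  # small periodic adjustment

print(f"toll ~ ${toll:.2f}, trips ~ {demand(toll):,.0f}")  # ~$5.00, ~600,000
```

Note that if base demand rises over time (population growth, or driving becoming more attractive as congestion eases), the same loop simply settles on a higher toll -- which is the point made above.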

The only way to build your way out of this limit is to add *so* much capacity to the street network that it far outstrips any conceivable demand. This works in a number of US cities, but they're small and have an extensive grid-based street network with few natural barriers like rivers. There is simply no way to retrofit such a street architecture into a city of 8.5 million people where land costs six million dollars an acre.

So imposing use fees really is the only way to alleviate traffic for a major city like New York or London. This raises economic fairness issues, for sure, but if you want fairness, you can have everyone suffer, or you can provide everyone with better transportation alternatives, but not necessarily the same ones. Yes, the wealthy will be subsidizing the poor, but they themselves will also get rewards well worth the price.

Comment "Is AI Apocalypse Inevitable? - Tristan Harris" (Score 1) 77

Another video echoing the point on the risks of AI combined with "bad" capitalism: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
        "(8:54) So just to summarize: We're currently releasing the most powerful inscrutible uncontrollable technology that humanity has ever invented that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under the maximum incentive to cut corners on safety. And we're doing this because we think it will lead to utopia? Now there's a word for what we're doing right now -- which is this is insane. This situation is insane.
        Now, notice what you're feeling right now. Do you feel comfortable with this outcome? But do you think that if you're someone who's in China or in France or the Middle East or you're part of building AI and you're exposed to the same set of facts about the recklessness of this current race, do you think you would feel differently? There's a universal human experience to the thing that's being threatened by the way we're currently rolling out this profound technology into society. So, if this is crazy why are we doing it? Because people believe it's inevitable. [Same argument for any arms race.] But just think for a second. Is the current way that we're rolling out AI actually inevitable, like if literally no one on Earth wanted this to happen would the laws of physics force AI out into society? There's a critical difference between believing it's inevitable which creates a self-fulfilling prophecy and leads people to being fatalistic and surrendering to this bad outcome -- versus believing it's really difficult to imagine how we would do something really different. But "it's difficult" opens up a whole new space of options and choice and possibility than simply believing "it's inevitable" which is a thought-terminating cliche. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can't do something else if we believe it's inevitable.
        Okay, so what would it take to choose another path? Well, I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path -- but under different incentives that offer more discernment, foresight, and where power is matched with responsibility. So, imagine if the whole world had this shared understanding about the insanity, how differently we might approach this problem..."

He also makes the point that we ignored the downsides of social media and so got the current problematical situations related to it -- and so do we really want to do the same with way-more-risky AI? He calls for "global clarity" on AI issues. He provides examples from nuclear, biotech, and ozone on how collective understanding and then collective action made a difference to manage risks.

Tristan Harris is associated with the Center for Humane Technology (whose mailing list I joined a while back):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.humanetech.com%2F
"Articulating challenges.
Identifying interventions.
Empowering humanity."

Just saw this yesterday: former President Obama talking about how concerns about AI (mostly about economic disruption) are not hype, and also how cooperation between people is the biggest issue:
"ORIGINAL FULL CONVERSATION: An Evening with President Barack Obama"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
        "(31:43) The changes I just described are accelerating. If you ask me right now the thing that is not talked about enough but is coming to your neighborhood faster than you think, this AI revolution is not made up; it's not overhyped. ... I was talking to some people backstage who are uh associated with businesses uh here in the Hartford community. Uh, I guarantee you you're going to start seeing shifts in white collar work as a consequence of uh what these new AI models can do. And so that's going to be more disruption. And it's going to speed up. Which is why uh, one of the things I discovered as president is most of the problems we face are not simply technical problems. If we want to solve climate change, uh we probably do need some new battery technologies and we need to make progress in terms of getting to zero emission carbons. But, if we were organized right now we could reduce our emissions by 30% with existing technologies. It'd be a big deal. But getting people organized to do that is hard. Most of the problems we have, have to do with how do we cooperate and work together, uh not you know ... do we have a ten point plan or the absence of it."

I would respectfully build on what President Obama said by adding that a major reason it is hard to get people to cooperate around such technology is that we need to shift our perspective, as suggested by my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

I said much the same in the open letter to Michelle Obama from 2011:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fopen-le...

One thing I would add to such a letter now is a mention of Dialogue Mapping using IBIS (perhaps even AI-assisted) to help people cooperate on solving "wicked" problems through visualizing the questions, options, and supporting pros and cons in their conversations:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcognitive-science.info...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fmedia%2Fl...
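
For anyone unfamiliar with IBIS, the underlying structure is tiny, which is part of its appeal: a tree of questions, candidate answers, and pro/con arguments. A minimal sketch of that structure (the example map is my own):

```python
# Minimal IBIS (Issue-Based Information System) node structure, the backbone
# of Dialogue Mapping: Questions spawn Ideas, which gather pros and cons.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                 # "question" | "idea" | "pro" | "con"
    text: str
    children: list["Node"] = field(default_factory=list)

    def add(self, kind: str, text: str) -> "Node":
        child = Node(kind, text)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + f"[{self.kind}] {self.text}")
        for c in self.children:
            c.show(depth + 1)

# Example map for the discussion in this very thread:
q = Node("question", "How should society roll out powerful AI?")
race = q.add("idea", "Race ahead; adoption is inevitable")
race.add("con", "Maximum incentive to cut corners on safety")
pause = q.add("idea", "Coordinate globally on clarity and safety first")
pause.add("pro", "Collective action worked for nuclear and ozone risks")
q.show()
```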

Here is one example of some people working in that general area to support human collaboration on "wicked problems" (there are others, but these are the people I am conversing with at the moment): "The Sensemaking Scenius", one way to help build the "global clarity" that Tristan Harris and, indirectly, President Obama call for:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.scenius.space%2F
        "The internet gods blessed us with an abundance of information & connectivity -- and in the process, boiled our brains. We're lost in a swirl of irrelevancy, trading our attention, at too low a price. Technology has destroyed our collective sensemaking. It's time to rebuild our sanity. But how?
Introducing The Sensemaking Scenius, a community of practice for digital builders, researchers, artists & activists who share a vision of a regenerative intentional & meaningful internet."

Something related to that by me from 2011:
http://barcamp.org/w/page/4722...
        "This workshop was led by Paul Fernhout on the theme of tools for collective sensemaking and civic engagement."

I can hope for a convergence of these AI concerns, these sorts of collaborative tools, and civic engagement.

Bucky Fuller talked about being a "trim tab", a smaller rudder on a big rudder for a ship, where the trim tab slowly turns the bigger rudder which ultimately turns the ship. Perhaps civic groups can also be "trim tabs", as in: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. (Margaret Mead)"

To circle back to the original article on what Facebook is doing: frankly, if there are some people at Facebook who really care about the future of humanity more than the next quarter's profits, this is the kind of work they could be doing related to "Artificial Super Intelligence". They could add Dialogue Mapping tools (IBIS or similar, perhaps AI-supported) to Facebook's platform to help people understand the risks and opportunities of AI and to support related social collaboration towards workable solutions -- rather than just rushing ahead to create ASI for some perceived short-term economic advantage. And this sort of collaboration-enhancing work is the kind of thing Facebook should be paying 100-million-dollar signing bonuses for, if such bonuses make any sense at all.

I quoted President Carter in that open letter, and the sentiment is as relevant about AI as it was then about energy:
        http://www.pbs.org/wgbh/americ...
        "We are at a turning point in our history. There are two paths to choose. One is a path I've warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure. All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy [or AI] problem."

Comment Transcending to a happy singularity? (Score 1) 77

You wrote: "As useful as capitalism has proved to be, its motivations are primitive and short sighted. How AI is being punted is another example of "bad" capitalism. Bad capitalism has helped wreck the planet more than anything else."

Geoffrey Hinton, as a self-professed socialist, makes a version of your point in the interview previously linked to.

And your point is ultimately the key insight emerging from our discussion, as I reflect on it. AGI or especially ASI may indeed take over someday to humanity's detriment, but if that happens, it is likely well in the future. The biggest threat right now to most humans is other humans developing and using AGI or ASI within a capitalist framework.

I wrote to Ray Kurzweil about something similar back in 2007, responding to a point in one of his books where he suggested the best way to quickly get AI was for competitive US corporations to create it. I suggested, essentially, that AI produced through competition is more likely to have a bad outcome for humanity than AI produced through cooperation. The points there could be made about several current AI entrepreneurs. Someone I sent it to put it up here, and I will include a key excerpt below:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fheybryan.org%2Ffernhout%2F...

That said, other systems, like the USSR's, have their own legacies of environmental destruction and suffering (as with Chernobyl). So capitalism has not cornered the market on poor risk management -- even though the ideal of any capitalist enterprise is to privatize gains while socializing risks and costs.

Here is one book of many I've collected on improving organizations (maybe of tangential relevance if you are thinking about organization improvement for your project):
"Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness" by Frédéric Laloux
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgithub.com%2Fpdfernhout%2F...
"Reinventing Organizations is a radical book, in the sense that radical means getting to the root of a problem. Drawing on works by other writers about organizations and human development, Frédéric Laloux paints a historical picture of moving through different stages of organizational development which he labels with colors. These stages are:
* Red (impulsive, gang-like, alpha male)
* Amber (conformist, pyramidal command and control)
* Orange (achievement, mechanistic, scarcity-assuming, cross-functional communications across a pyramid)
* Green (pluralistic, inverted pyramid with servant leadership and empowered front line)
* Teal (evolutionary, organic, abundance-assuming, self-actualized, self-organizing, purpose-driven)."

Maybe we as a society need to become Teal overall -- or at least Green -- if we are to prosper with AI?

Good talking to you too, same.

--------- From book-review-style email to Ray Kurzweil in 2007

To grossly simplify a complex subject, the elite political and economic culture Kurzweil finds himself in as a success in the USA now centers around maintaining an empire through military preparedness and preventive first strikes, coupled with a strong police state to protect the accumulated wealth of the financially obese. This culture favors market-driven approaches to the innovations needed to sustain this militarily-driven, police-state-trending economy, where entrepreneurs are kept on very short leashes, where consumers are dumbed down via compulsory schooling, and where dissent is easily managed by removing profitable employment opportunities from dissenters, leading to self-censorship. Kurzweil is a person now embedded in the culture of the upper crust of the USA's military and economic leadership. So one might expect Kurzweil to write from that perspective, and he does. His solutions to the problems the singularity poses reflect all these trends -- from promoting first-strike use of nanobots, to design and implementation facilitated through greed, to widespread one-way surveillance of the populace by a controlling elite.

But the biggest problem with the book _The Singularity Is Near: When Humans Transcend Biology_ is that Kurzweil seems unaware that he is doing so. He takes all those things as given, like a fish ignoring water, ignoring the substance of authors like Zinn, Chomsky, Domhoff, Gatto, Holt, and so on. And that shows a lack of self-reflection on the part of the book's author -- a lack of self-reflection which seems dangerously reckless for a person of Kurzweil's power (financial, political, intellectual, and persuasive). Of course, the same thing could be said of many other leaders in the USA, so he is not alone there. But one expects more from someone like Ray Kurzweil for some reason, given his incredible intelligence. With great power comes great responsibility, and one of those responsibilities is to be reasonably self-aware of one's own history and biases and limitations. He has not yet joined the small but growing camp of the elite who realize that accompanying the phase change the information age is bringing on must be a phase change in power relationships, if anyone is to survive and prosper. And ultimately, that means not a move to new ways of being human, but instead a return to old ways of being human, as I shall illustrate below drawing on work by Marshall Sahlins. ...

One of the biggest problems as a result is Kurzweil's view of human history as incremental and continual "progress". He ignores how our society has gone through several phase changes in response to continuing human evolution and increasing population densities: the development of fire and language and tool-building, the rise of militaristic agricultural bureaucracies, the rise of industrial empires, and now the rise of the information age. Each has posed unique difficulties, and the immediate result of the rise of militaristic agricultural bureaucracies or industrialism was most definitely a regression in standard of living for many humans at the time. For example, studies of human skeleton size, which reflects nutrition and health, show that early agriculturists were shorter than the preceding hunter-gatherers and showed more evidence of disease and malnutrition. This is a historical experience glossed over by Kurzweil's broad exponential trend charts related to longevity, which jump from Cro-Magnon to the industrial era. Yes, the early industrial times of Dickens in the 1800s were awful, but that does not mean the preceding times were even worse -- they might well have been better in many ways. This is a serious logical error in Kurzweil's premises, leading to logical problems in his subsequent analysis. It is not surprising he makes this mistake, as the elite in the USA he is part of finds this fact convenient to ignore, since it would threaten the whole set of justifications related to "progress" woven around itself to justify a certain unequal distribution of wealth. It is part of the convenient ignorance of the implication that, say, the Enclosure Acts in England drove the people from the land and farms that sustained them, forcing them into deadly factory work against their will -- an example of industrialization creating the very poverty Kurzweil claims it will alleviate.

As Marshall Sahlins shows, for most of history, humans lived in a gift economy based on abundance. And within that economy, for most food and goods, families or tribes were mainly self-reliant, drawing from an abundant nature they had mostly tamed. Naturally there were many tribes with different policies, so it is hard to completely generalize on this topic -- but certainly some did show these basic common traits of that lifestyle. Only in the last few thousand years did agriculture and bureaucracy (e.g. centered in Ancient Egypt, China, and Rome) come to dominate human affairs -- and even then it was a dominance from afar and a regulation of a small part of life and time. It is only in the last few hundred years that the paradigm has shifted to specialization and an economy based on scarcity. Even most farms 200 years ago (where 95% of the population lived then) were self-reliant for most of their items judged by mass or calories. But clearly humans have been adapted, for most of their recent evolution, to a life of abundance and gift giving.

When you combine these factors, one can see that Kurzweil is right for most recent historical trends, with this glaring exception, but then shows an incomplete and misleading analysis of current events and future trends, because his historical analysis is incomplete and biased. ...

So, this would suggest more caution approaching a singularity. And it would suggest the ultimate folly of maintaining R&D systems motivated by short-term greed to develop the technology leading up to it. But it is exactly such a policy of creating copyright and patents via greed (the so-called "free market", where paradoxically nothing is free) that Kurzweil exhorts us to expand. And it is likely here where his own success most betrays him -- where the tragedy of the approach to the singularity he promotes will result from his being blinded by his very great previous economic success. If anything, the research leading up to the singularity should be done out of love and joy and humor and compassion -- with as little greed about it as possible IMHO. But somehow Kurzweil suggests the same processes that brought us the Enron collapse and war profiteering through the destruction of the cradle of civilization in Iraq are the ones to bring humanity safely through the singularity. One pundit, I forget who, suggested the problem with US cinema and TV was that not enough tragedies were produced for them -- not enough cautionary tales to help us avert such tragic disasters arising from our own limitations and pride.

Kurzweil's rebuttals to critics in the last part of the book primarily focus on those who do not believe AI can work, or those who doubt the singularity, or the potential of nanotechnology or other technologies. One may well disagree with Kurzweil on the specific details of the development of those trends, but many people besides him, including before him, have talked about the singularity and said similar things. Of the fact of an approaching singularity there is likely little doubt, it seems, even as one can quibble about dates and such. But the very nature of a singularity is that you can't peer inside it, although Kurzweil attempts to do so anyway, without enough caveats or self-reflection. So, what Ray Kurzweil sees in the mirror of a reflective singularity is ultimately a reflection of -- Ray Kurzweil and his current political beliefs.

The important thing is to remember that Kurzweil's book is a quasi-Libertarian/Conservative view of the singularity. He mostly ignores the human aspects of joy, generosity, compassion, dancing, caring, and so on, to focus on a narrow view of logical intelligence. His antidote to fear is not joy or humor -- it is more fear. He has no qualms about enslaving robots or AIs in the short term. He has no qualms about accelerating an arms race into cyberspace. He seems to have a significant fear of death (focusing a lot on immortality). The real criticisms Kurzweil needs to address are not the straw men which he attacks (many of which are produced by people with the same capitalist / militarist assumptions he has). It is the criticisms that come from those thinking about economies not revolving around scarcity, or those who reflect on the deeper aspects of human nature beyond greed and fear and logic, which Kurzweil needs to address -- perhaps even as part of his own continued growth as an individual. To do so, he needs to intellectually, politically, and emotionally move beyond the roots that produced the very economic and political success which let his book become so popular. That is the hardest thing for any commercially successful artist or innovator to do. It is often a painful process full of risk. ...

I do not intend to vilify Kurzweil here. I think he means well. And he is courageous to talk [a]bout the singularity and think about ways to approach it that support the public good. His early work on music equipment and tools for the blind is laudable. So was his early involvement with Unitarians and social justice. But somewhere along the line perhaps his perspective became shackled by his own economic success. To paraphrase a famous quote, perhaps it is "easier for a camel to go through the eye of a needle than for a rich man to comprehend the singularity." :-) I wish him the best in wrestling with this issue in his next book.

Comment Re:I Disagree (Score 2) 73

Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.

I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.

That's where I am on his point. I think he's absolutely right, that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.

LLMs, as I understand them, generate plausible sounding responses to prompts; in fact with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious gap. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.

On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive their models *clearly* strive to give the user an answer that he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because it is designed to manipulate my faith in responses which aren't necessarily defensible.

In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are, in my opinion, extremely dangerous. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is: will they be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity?
