Comment Re:Buying is often cheaper (Score 2) 584

It is a poor investment if you need to sell in less than 5-6 years

Absolutely. And I'd second taking a hard look at the "math," as well as including a lot of unforeseen potential costs of home ownership if you're not planning on staying long-term.

Most people just do comparisons of rent vs. mortgage+taxes or whatever. There's often a lot more to home ownership: there are transaction fees both to buy and sell, agent fees if you use one, and then there's the real question of how expensive maintenance can really be.

If it's a newer home, and you're lucky, maintenance may not be a major cost. If you're looking at an older home and you're unlucky, you may find really expensive stuff once you start tearing open a wall or a floor just to solve what seems to be a "minor issue" in your inspection before buying.

I bought a home several years back in one of the best neighborhoods in a medium-size city. We specifically looked for a home in a neighborhood that would maintain value in case we needed to sell. Five years later, we needed to sell because of a move. And we did sell -- and relatively quickly, and we got nearly 15% above our buying cost, which sounds good, right?

Except over the five years there we had to put on a new roof (something we knew going into buying it), then we ended up doing a new HVAC system, complete remodel of a smaller bathroom, and a lot of other miscellaneous things just minimally needed to get the house ready for sale again. In the end, yes, we did end up losing less money over five years than we would have if we had been paying rent for a house of that size in that neighborhood... but not by a huge amount. If we had been renting an apartment or smaller house or something in a slightly less nice neighborhood, we undoubtedly would have saved money over owning for five years, taking into account maintenance, improvements, transaction fees, etc. And this was for an older home that overall was in decent condition and had already been updated significantly by previous owners.
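To make that concrete, here's a back-of-the-envelope comparison in Python. All figures are made-up illustrations (not my actual numbers), and it deliberately ignores mortgage-interest/equity splits and tax effects:

```python
# Toy rent-vs-buy comparison over a fixed horizon. All figures are
# hypothetical illustrations, not real market data.

def owning_cost(price, sale_price, years, annual_carrying, maintenance,
                buy_fee_rate=0.02, sell_fee_rate=0.06):
    """Net cost of owning: carrying costs + maintenance + transaction
    fees, minus the gain on sale. Ignores mortgage interest vs. equity
    split and tax effects for simplicity."""
    fees = price * buy_fee_rate + sale_price * sell_fee_rate
    return annual_carrying * years + maintenance + fees - (sale_price - price)

def renting_cost(monthly_rent, years):
    """Total rent paid over the same horizon."""
    return monthly_rent * 12 * years

# A 5-year stay: a 15% sale gain can be largely eaten by fees and
# big-ticket repairs (roof, HVAC, bathroom remodel).
own = owning_cost(price=300_000, sale_price=345_000, years=5,
                  annual_carrying=6_000, maintenance=60_000)
rent = renting_cost(monthly_rent=1_500, years=5)
print(f"owning: ${own:,.0f}  renting: ${rent:,.0f}")
```

With these (invented) numbers, owning still comes out ahead of renting, but by far less than the headline "15% appreciation" suggests -- which matches my experience.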

Over decades of ownership, you will likely build equity and can amortize the costs of maintenance and upkeep over time. But if you're not reasonably certain you'll be living somewhere for at least a decade or so, I'd seriously crunch the numbers and take into account potential unexpected costs if you'd have to resell.

Comment Re:Kids (Score 1) 147

Because some people like their kids? Some people enjoy spending time with them, teaching them things? Because some people -- dare I say -- LOVE their kids?

When I was younger, I couldn't imagine having kids. Now I can't imagine life had I not.

Many males (and even many females) don't enjoy their time raising kids or being with small children in general. Many others do. My previous post wasn't saying that kids are a negative experience, merely pointing out that one has a lot less flexibility about choices for one's weekend activities when they are present. But people choose constraints in their lives all the time for overall benefit. Some days you may not want to play on that sports team you signed up for, or go to the soup kitchen you volunteered for -- but overall, the days when you feel good about participating in something that structures and constrains your time outweigh the times you don't want to do it. Same thing with kids for many people -- there are lots of negatives depending on your perspective, but many would say the positives outweigh the negatives.

Comment Re:Kids (Score 4, Insightful) 147

That, or you have kids.

Definitely, this.

Weekends are a completely different thing with small children. From birth until early teens, you can likely expect to lose a lot of weekend time to activities related to childcare (playing with them, cleaning up after them, ferrying them to various activities, etc.). Combine that with basic stuff you HAVE to do -- like cleaning up the house in general, home maintenance, etc. -- and weekends are often gone.

Well, unless you want to be that dad who spends every weekend in his study with his stamp collection or building a ship in a bottle and yelling at his little kids to "go away" when they bother him. And you'll need a cooperative spouse (or hire a nanny).

Sure, you can incorporate kids (even little kids once they're at least toddlers) in a lot of weekend maintenance and such, but be prepared for everything to take twice as long. Once they get older, you can often incorporate them into a hobby like woodworking or some other craft activity, but that may or may not be as satisfying as devoting your own time to honing your own skills.

It's great and all to talk about "self actualization" on the weekends, and I did a lot of that in my spare time on the weekends in my 20s. Then "life happened." Once kids hit their teens, you may be able to reclaim more time for your weekend leisure. But a lot of people spend many years of their adult lives with lots of weekend responsibilities they can't get out of. You can't really blame them for taking the few hours of "downtime" they end up with and sitting in front of the TV or whatever.

Comment Re:Can it interpret a sonnet? (Score 1) 48

Your chatbot example is full of references to human perception ("nobody wants to be compared to a winter's day") that only a human can have. You not only want intelligent computers, but ones that have a very deep understanding of what it means to be human.

First of all, it's not "my" example -- it was Alan Turing's. He was a founder of the entire field of AI. If you want to criticize him, fine -- but this was a standard he was willing to accept.

Second, you're getting lost in the details. The point isn't about understanding sonnets, it's about understanding, period. The conversation could have been about black holes or traffic patterns or birds on a beach.

An artificial "general intelligence" has been a goal of many AI promoters since the beginning. With such a general intelligence, it should be trivial to have it train in any number of skills or disciplines and answer questions about them.

I should remind you that there are real humans that have trouble with this. Some would reply such things as "Because I want to", and "F*ck off". Would you not consider those "intelligent" (in the AI sense)?

Are you asking me whether such replies are "intelligent" replies? My reply to that would likely be "no": such replies do NOT demonstrate intelligence. It doesn't mean that the beings who make them are unintelligent, but such responses alone would not demonstrate intelligence or understanding. But I would assume a cooperative and non-mentally ill human would be able to talk about SOMETHING on SOME subject in such a manner as to demonstrate understanding of something. Lots of people seem to think the goal of the Turing test is to simulate human behavior -- and if that were true, some chatbots might seem to have done well already. Your examples of non-responsive answers are perfectly "human" behavior.

But the test IS NOT a test of simulating behavior. Turing designed this as a test for intelligence and assumed that the participating parties (including the interrogator, the computer, and the other human) would be acting in good faith. Moreover, if the test was just to determine which was a computer, the interrogator could simply ask, "Are you a computer?" But that would provide no insight into intelligence.

The dialogue he gave illustrates understanding of concepts. Those concepts need not be predicated on human experience. They could be something completely different. But the responses require something MORE than basic "parroting" or canned replies. They require something more than a basic algorithm to solve a specific problem. They require a system that can learn and then demonstrate its understanding of that learning when asked.

Perhaps there are other kinds of "machine intelligence" that wouldn't be able to do that. But I submit that if we are actually able to simulate a "general intelligence" even on the level of a dog, it would likely be trivial to then expand it or train it to answer questions somewhat like what's proposed by Turing here.

Comment Re:This has been predicted forever (Score 4, Insightful) 472

Indeed, in 1930 John Maynard Keynes famously predicted that by 2030 we'd be working 15-hour weeks.

Keynes predicted that this would be accomplished through a 4- to 8-fold increase in worker productivity. Well, we're basically on track to the 8-fold productivity goal by 2030, but everyone is still working 40-hour weeks. Why?

It's easy to say (as some posts here already have) that from a practical standpoint our cost of living has increased. If we all wanted the kind of lifestyle 1930s tech could provide, we'd likely be able to survive on a lot less money.

But that's a facile argument: even if someone wanted to live a 1930s lifestyle, how many employers really want someone who will only work 15 hours/week? Sure, there are plenty of part-time jobs, but they're generally minimum wage or not much higher. Unless you're a senior person who can dictate your own hours or have a long-established career that allows you to "consult" for only 15 hours/week, it's not really feasible in American culture to even make that choice.

So where did the productivity go? That's the real question. Keynes assumed that the profits from the excess productivity would be distributed throughout the workforce, thereby making it feasible for workers to gradually reduce the worktime from 40 hours to 15 hours each week. Instead, the vast majority of the excess productivity profits have gone to benefit the owners and executives of companies. And to live a normal "middle-class" lifestyle of the 2010s, one still has to work 40 hours/week (or more). CEOs in the mid-1900s made perhaps 20 times the average worker salary. Today they make over 300 times the average worker salary.
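The arithmetic behind Keynes's projection is simple enough to sketch, using round hypothetical numbers (a 40-hour baseline week and the upper end of his productivity range):

```python
# Rough arithmetic behind Keynes's prediction: if output per hour rises
# 8x, the 1930 standard of living needs only 1/8 of the hours.
# Assumes productivity gains translate directly into either free time
# or consumption -- which is exactly the assumption that failed.
hours_1930 = 40          # approximate baseline workweek
productivity_gain = 8    # upper end of Keynes's 4x-8x projection

hours_for_1930_lifestyle = hours_1930 / productivity_gain
print(hours_for_1930_lifestyle)  # 5.0 hours/week buys a 1930 lifestyle

# Keynes's 15-hour week implies consuming ~3x the 1930 standard:
consumption_multiple = 15 * productivity_gain / hours_1930
print(consumption_multiple)  # 3.0
```

The point of the sketch: even generously, a 15-hour week "should" support triple the 1930 standard of living -- so the shortfall isn't in productivity, it's in who captured the gains.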

So, no offense to Jack Ma, but what exactly does he think will happen in the next 30 years that will break the trend of excess productivity and profits going to the upper classes, rather than being distributed more evenly and allowing less worktime?

Comment Re:It's about the money (Score 2) 145

Let's not forget maturity and life experience. Guys in their 20s (particularly early 20s) frankly still tend to do a lot of stupid stuff. No, I'm not intending to insult any young people around here, and I'll fully cop to this myself. But it's the reason why insurance rates, for example, are so high for young men -- they get into more accidents because they do riskier things and simply aren't as mature.

Eventually life experience kicks in -- for some later than others. By the time you're a little older, you also have had time to think about kids and maybe think about imparting your "wisdom" to them a bit. If you're a 20-year-old guy with a baby, you're just running around trying to make the damn thing stop screaming. If you're a 35-year-old guy with a kid, you're likely thinking about all the stupid crap you did when you were 20 and how to not do that with your kid.

Also, let's face it -- older guys are less physical. If you're a young guy, you're probably out still playing stuff on the fields with your friends. If you're a little older, you may not be doing that as much. So what do you choose to do with your kids? Are you out there playing football and soccer every day, or are you more likely to spend some days doing something more intellectual with your kid indoors?

Just generalizing wildly here, but there are a lot of possible factors...

Comment Re:Two Things (Score 1) 516

And if nothing changes, then what? Come to a stop in the middle of the road?

Throw on the flashers and stop, maybe? It's obviously not a good solution, but when you create a thing and call it "autopilot," you have to figure out how to handle such a situation.

I mean, let's think about this. As you point out, this equipment wasn't yet "smart enough" to pull over. The driver appears to be non-responsive. What is it supposed to do? The driver could be asleep. He could have had a heart attack. Should the car keep driving until it runs into something or runs out of gas?

I frankly don't know. But I don't think you should install something called "autopilot" into a car until you've figured out what the car should do when the driver is AWOL.
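To make the dilemma concrete, here's a toy sketch of the fallback decision -- purely hypothetical, NOT any vendor's actual logic; the function and mode names are invented for illustration:

```python
# A toy sketch of the decision problem, not any real autopilot's logic:
# once the driver is unresponsive, the system must pick a degraded mode.

def fallback_action(driver_responsive: bool, can_pull_over: bool) -> str:
    """Choose a failure mode when hands-on-wheel checks go unanswered."""
    if driver_responsive:
        return "continue"                      # normal operation
    if can_pull_over:
        return "pull_over_and_stop"            # best case: leave the roadway
    # Worst case: the system can't safely pull over. Stopping in-lane
    # with hazard flashers is bad, but arguably less bad than driving
    # until the fuel runs out or a crash occurs.
    return "slow_stop_in_lane_with_hazards"

print(fallback_action(driver_responsive=False, can_pull_over=False))
```

The uncomfortable part is that every branch after "driver unresponsive" is a choice between bad options -- which is exactly why it should be designed before shipping, not after.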

Comment Re:Global warming. (Score 4, Interesting) 286

This is one effect of global warming no one foresaw.

Uh, it's actually a pretty well-known issue. Lots of flights in the Middle East tend to be scheduled at night or in cooler parts of the day to avoid such problems. Larger planes with more powerful engines can often cope with higher temperatures, but it's a problem for less powerful planes that can't accelerate enough to get off the ground with a short runway.

It's a known issue. But so far not a common-enough one to extend runways or do expensive plane redesigns.
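The physics is straightforward to sketch: by the ideal gas law, hotter air is less dense, and at a fixed airspeed both wing lift and engine thrust scale roughly with air density. A quick estimate (dry air at sea-level pressure; real density-altitude calculations also account for humidity and field elevation):

```python
# Why heat hurts takeoff: air density falls with temperature (ideal gas
# law), and lift and thrust scale roughly with density at fixed speed.
R_AIR = 287.05      # specific gas constant for dry air, J/(kg*K)
P_SEA = 101_325.0   # standard sea-level pressure, Pa

def air_density(temp_c: float, pressure: float = P_SEA) -> float:
    """Dry-air density from the ideal gas law, in kg/m^3."""
    return pressure / (R_AIR * (temp_c + 273.15))

rho_mild = air_density(25.0)   # a mild day
rho_hot = air_density(50.0)    # a desert heat wave
loss = 1 - rho_hot / rho_mild
print(f"approximate lift/thrust reduction at fixed speed: {loss:.0%}")
```

An ~8% density drop means a longer takeoff roll and a lower maximum takeoff weight -- manageable for big jets with thrust to spare, marginal for smaller aircraft on short runways.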

Comment Re:I can summarize (Score 1) 48

Let's clear up a few things here. First, I sort of agree with you broadly: a field defines its own terms. When people in the field of AI talk about AI without qualification, they often mean "weak AI." That's true.

There are just a few elements of GP's objections, though, that make your response a bit overbearing. First, this is explicitly discussing an article on "AI Progress." Let's be clear that from the beginning AI researchers have often had some sort of "strong AI" as a long-term goal. In recent years, it seems some researchers have sort of abandoned that or tried to claim that "machine intelligence" should be evaluated differently from human intelligence, thus putting off "strong AI" as a goalpost.

Nevertheless, you must admit that the concept of artificial general intelligence is still a target area of many researchers in the field. It's where the field got its name in the first place. That ultimate goal may have been sidestepped now because of the increasing realization that it wasn't something that could just be solved in a summer or two (as they thought back in the 1950s), but the whole reason it's called "AI" and not "adaptive algorithms" or some other more generic term is because of this concept of general intelligence as a goal.

So, when an article is about "progress" in AI, it's not unreasonable to bring up how close or far we may be from that original goal of the field.

Lastly, it's important to note that while a field gets to define its terminology, it's reasonable for laymen outside the field to point out when the terminology sounds misleading. "Artificial intelligence" is NOT like human "intelligence" as you note, nor even yet anywhere close to animal "intelligence" as much as we understand it. So why use the word "intelligence"? "Neural nets" are not anything like neurons. "Deep learning" barely resembles what we think of as "learning" for humans, certainly not "deep" in a straightforward English-language sense.

There IS a mismatch between the implications of the nomenclature and the results so far. If laymen are confused about AI, it's partly the AI researchers' fault for choosing misleading terminology that implies a stronger connection to human intelligence than the level AI is currently at. Is it tedious and unhelpful to point that out for EVERY article on AI tech? Probably. Is it relevant to mention it on an article discussing AI "progress" or an article clearly focused on Strong AI in the future (like dystopian prophecies, etc.)? Yeah, it is.

Comment Can it interpret a sonnet? (Score 5, Informative) 48

What Alan Turing wrote in 1950 about the "imitation game":

I am sure that Professor Jefferson [a critic of AI] does not wish to adopt the extreme and solipsist point of view. Probably he would be quite willing to accept the imitation game as a test. The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has "learnt it parrot fashion." Let us listen in to a part of such a viva voce:

Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

Witness: It wouldn't scan.

Interrogator: How about "a winter's day," That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter's day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

And so on. What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce? I do not know whether he would regard the machine as "merely artificially signalling" these answers, but if the answers were as satisfactory and sustained as in the above passage I do not think he would describe it as "an easy contrivance."

That's an example of what Alan Turing expected of the "Turing Test." And the issue isn't knowledge of sonnets or English lit here or whatever -- it's being able to parse and understand and respond reasonably to demonstrate such understanding. That was Turing's definition of AI. The kind of AI that he predicted by the year 2000 would be able to fool a skilled "interrogator" specifically trying to trip up the AI and identify the computer when an AI would be put up against a human in the "imitation game" test.

When a chatbot can do this, call me. Otherwise, all of this talk about "artificial intelligence," "deep learning," "neural networks," etc. is just fancy words for slightly more powerful statistical tools and adaptive algorithms. Maybe chaining billions of such things together could eventually lead to something that could carry on a conversation like Turing's example, but I've never encountered a chatbot with anything close to that. Most chatbots can't understand a pronoun reference to the previous sentence, let alone make abstract connections as shown in the above quotation.

Comment Re:Prediction (Score 1) 175

Along with POGs.

Pianist Occupied Governments?

No, POG is a tasty drink composed of a blend of passionfruit, orange, and guava juice, hence the name. I still recall first drinking some in college when a Hawaiian friend had a few gallons shipped to him.

Oh yeah, but some idiots stole some game and marketed it. I don't know what that's about. Drink POG. It's a lot more enjoyable than some stupid game.

Comment Re:Existence [Re:Can we stop caring about this?] (Score 1) 253

If one wants to have a new descriptive term, then it needs to be defined.

It can be defined just like other problematic acts. Legally, we have all sorts of statutes that depend on intent by someone who commits an act and perception of someone who is the target of that act.

What is assault? Punching someone is arguably "totally subjective" too. One can accidentally hit someone, in which case you lack intent. One can ask or consent to be punched (e.g., in boxing or other sparring practice, etc.), so intent to hit and cause possible harm was present, but consent was also present, so this generally can't be prosecuted legally.

But if a punch has both intent and is perceived by the target to be unwarranted and without consent, it's assault.

One can apply this to any number of potentially problematic actions which aren't legally problematic under all circumstances.

You may argue that "hate speech" may be more ambiguous, but let's not pretend that actions in general don't require interpretation. I'll agree with you that "hate speech" is NOT solely contained in the words themselves. I absolutely do NOT agree with anyone who wants to define "hate speech" simply by declaring certain words to be "out of bounds." But if there is intent to harm AND perceived harm, and both can be reasonably proven (as with assault), why can't we define such a category of speech acts?

(To be clear, I'm not generally in favor of most "hate speech" restrictions on legal speech. But that doesn't mean the category can't possibly be defined.)

Comment Re:Can we stop caring about this? (Score 1) 253

All controversial speech is political, because of human nature: we try to suppress what annoys, disgusts, or offends us.

Okay, but that requires a definition of "political" that's likely a lot broader than most people think of. All controversial speech is SOCIAL -- I'll grant you that. But yelling an insult at your neighbor is not (by most people's definitions) a "political" act. It may be offensive to your neighbor and to other people who hear it, but that doesn't automatically make it "political."

Political, according to standard dictionary definitions, has something to do with government. Insulting your neighbor (even in a rather offensive way) because you don't like the color of his shirt has nothing to do with the government.
