That's what I assumed as well. Buy Now Pay Later loans like this have a long history of being predatory. So I took a look at what it would cost to accept Klarna (as an example) as a merchant. The reality is that they have transaction fees that are very similar to credit cards. In other words, these companies do not need to rely on missed payments to make a profit.
These companies are apparently setting themselves up to replace traditional credit card payment systems, which suits me right down to the ground.
The difference is that it is much easier to get a Klarna account, and it isn't (yet) as widely available.
YouTube needs to be regulated as a telecom provider. As such, it must be prevented from discriminating against content for any reason other than it being illegal.
Sure, if you want it to become an unusable cesspool. If you just hate YouTube and want to kill it, this is the way. Same with any other site that hosts user-provided content -- if it's popular and unmoderated it will become a hellscape in short order.
I felt the same way at first. Traditional BNPL schemes were very predatory. However, Klarna (and others) appear to be playing approximately the same game as the traditional credit card processors: they charge similar transaction fees, and, like credit cards, their customers don't pay extra if they pay their bill on time. Klarna, in particular, actually appears to give customers an interest-free period.
The difference, for consumers, is primarily that a Klarna account is much easier to get, and that it isn't universally accepted. From a merchant's perspective, depending on your payment provider, you might already be able to accept Klarna, and it appears to mostly work like a credit card. It's even possible that chargebacks are less of an issue, although it does appear that transaction fees are not returned when a purchase is refunded.
Personally, I am all for competition when it comes to payment networks. Visa and Mastercard are both devils. More competition for them is good for all of us.
The buy-now-pay-later services being used are zero-interest as long as payments are made on time, so this could just be people living paycheck to paycheck (which indicates bad financial management more than poverty) using BNPL to smooth out their expenses so they don't have to wait for a paycheck to buy groceries. It could be a significant improvement for those who used to occasionally take out payday loans (which are decidedly not zero-interest). These people would be better off adjusting their spending habits and maintaining a cash buffer of their own, but if they aren't going to do that, BNPL is a better option than waiting for payday before buying food or using a payday loan service.
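To put rough numbers on the payday-loan comparison: a back-of-envelope sketch, assuming the CFPB's oft-cited typical payday fee of $15 per $100 borrowed for a two-week term (the $200 grocery bill is made up for illustration).

```shell
# Compare the cost of covering a $200 grocery run with a payday loan
# versus an on-time BNPL plan (which costs the consumer $0).
groceries=200
payday_fee=$(( groceries * 15 / 100 ))   # $15 per $100 for two weeks => $30
apr=$(( 15 * 26 ))                       # 15% per 2-week period, 26 periods/year
echo "payday loan cost: \$$payday_fee (~${apr}% APR); on-time BNPL cost: \$0"
```

Even one such loan per month dwarfs anything BNPL charges a customer who pays on time.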
But obviously the only reason these buy-now-pay-later services are in business is that some of their customers fail to make the zero-interest payments and end up having to pay interest, and this number is high enough to make them profitable. It would be very interesting to find out what that percentage is. People who are paying interest on regular purchases like groceries are throwing money away, which is clearly bad.
Imagine the amusement park rides
They're not that good.
We're whalers on the moon
We carry a harpoon
But there ain't no whales
So we tell tall tales
And sing our whaling tune
hope that the new vomit is marginally different
The rest of your comment is basically correct, if unnecessarily negative, but this isn't. Traditional tools like diff make it very easy to see exactly what has changed. In practice, I rely on git: I stage all of the iteration's changes ("git add"), so that a plain "git diff" afterwards shows only what the next iteration touched.
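A minimal sketch of that staging workflow (the throwaway repo and file name are made up for the demo): snapshot the accepted state with git add, and git diff then shows only the next round of edits.

```shell
# Demo in a scratch repo: stage the accepted iteration, edit, then diff.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf 'v1\n' > main.py
git add -A                        # accept the current iteration into the index
printf 'v2\n' > main.py           # stand-in for the AI's next round of edits
changed=$(git diff --stat)        # index vs. worktree: only the new changes
echo "$changed"
```

No commits are needed; diffing the worktree against the index is enough to isolate each iteration.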
I also find it's helpful to make the AI keep iterating until the code builds and passes the unit tests before I bother taking a real look at what it has done. I don't even bother to read the compiler errors or test failure messages, I just paste them in the AI chat. Once the AI has something that appears to work, then I look at it. Normally, the code is functional and correct, though it's often not structured the way I'd like. Eventually it iterates to something I think is good, though the LLMs have a tendency to over-comment, so I tend to manually delete a lot of comments while doing the final review pass.
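The iterate-until-green loop above has a simple structure. In this sketch, `build_and_test` and `ask_ai` are stubs I made up standing in for the real build/test command and the AI round-trip; here the "AI" is pretended to fix things on its second round.

```shell
# Stubbed version of: keep feeding failures back until the build is green.
attempts=0
fixed_after=2                                    # pretend round 2 goes green
build_and_test() { [ "$attempts" -ge "$fixed_after" ]; }   # stub for make/pytest
ask_ai() { attempts=$(( attempts + 1 )); }       # stub: paste failures back, unread
until build_and_test; do
    ask_ai
done
echo "AI rounds before human review: $attempts"
```

The point is that the human only enters the loop after it terminates.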
I actually find this mode of operation surprisingly efficient -- not so much because the code gets written faster, but because I can get other things done in parallel: I mostly avoid mental context switches while the AI is working and compiles and tests are running.
This mode is probably easier for people who are experienced and comfortable with doing code reviews. Looking at what the AI has done is remarkably similar to looking at the output of a competent but inexperienced programmer.
but every astronomer's shy friend. Statistics.
Making wild assumptions about the prevalence of 3-body star systems with the requisite properties, you could come up with fat error bars on the prevalence of this scenario as a Type Ia progenitor?
Journal article!
There was a cohort of grad students living in university housing at a small institution of higher learning in the Greater Los Angeles area in the foothills of the San Gabriel Mountains who stayed up that late to watch a certain TV program because they all lacked a normal social life. This television program from Canada featured members of the Toronto Second City improvisational comic troupe, and this program was shown in Los Angeles following NBC Saturday Night Live. Several of the actors went on to appear on Saturday Night Live and later in movies. It is the shared culture of those grad students, and at least one of them, who happened to be from Canada, went on to contribute scientifically to astronomy.
This program, if you can believe it, was even more out-of-the-mainstream, subversive and edgy and not ready for prime time than the Saturday Night Live of the late 1970s. Toronto was a "second city" to the Canadian cultural center of Montreal, and Canadians I have known carry a resentment that Canada is a "second country" to the U.S., and the television program, originating in a fictitious "downmarket" generic North American city named "Melonville" built heavily on those themes.
One of the sketches, called Celebrity Blow-up, parodied the sort of TV content that could be developed at a downmarket North American TV station. It featured a pair of actors dressed in denim coveralls who spoke ungrammatically. Their guests were other comics doing impressions of well-known Hollywood actors who were known to over-act or otherwise have a high enough opinion of themselves as actors to be ripe for comedic parody. Each "guest" was encouraged to "blow up" on screen, where they literally exploded -- a cheesy video special effect within the budget of the downmarket TV station originating this fictitious program. Lacking cultural refinement, the denim-wearing hosts would find this entertaining and yell, "he blowed-up, real good!"
Your astronomer colleagues, who just might include my Canadian friend from over 40 years ago, are excited about the prospect that a nearby recurrent nova would "blowed-up, real good!" -- a prospect about as realistic as an overacting Hollywood actor vanishing in an explosion on camera. But since you come from a different time, place and cohort of graduate students, one perhaps not reliant on a low-budget Canadian-import TV program as a shared cultural experience, the reference has no context for you, for which I apologize sincerely.
What kind of code coverage are you getting from your autogenerated unit tests?
It does a pretty good job on the obvious flows, both positive and negative cases. But where coverage is inadequate, you can iterate quite easily and automatically with a coverage tool: just take the coverage tool's output and feed it to the LLM. I have found that I don't even need to tell it what to do with the coverage data; it understands what the tool output means and what it should do in response.
Like with the compiler and test runner, what would really make this work well is if the AI could run the coverage tool itself, so it could iterate without my interaction. With that, I could just tell it to write unit tests for a given module and give it a numeric coverage threshold it needs to meet, or to explain why the threshold can't be met.
I expect that the resulting tests would be very mechanistic, in the sense that they would aim to cover every branch but without much sense of which ones really matter and which ones don't. But maybe not. The LLM regularly surprises me with its apparent understanding not only of what code does, but of why. Regardless, review would be needed, and I'd undoubtedly want to make some changes... but I'll bet it would get me at least 75% of the way to a comprehensive test suite with minimal effort.
That was basically my suggestion: the government assumes a standard deduction, pulls from basic public records, and sends you an estimated tax bill. You can accept and pay, or file a return.
Makes sense.
For me, I'd never need to do anything; everything I do is already reported to the government, and I suspect most Americans fall into that category. Unless Fidelity isn't telling the government about my capital gains.
Could be worse than that. One year I had a problem where my brokerage reported all of my gains but failed to report the cost basis. This was on a bunch of Restricted Stock Unit sales that happened automatically when the stock vested, so the actual capital gains are always very close to zero, since the sale occurs minutes after the vesting. But from the 1099-B, it appeared I had 100% gains on a bunch of stock sales roughly equal to my annual salary (about half of my income is stock). Worse, taken at face value it would have taxed me on that money twice, since the vesting counts as normal income reported on the W-2, and then the sale counts as a 100% short-term capital gain.
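A quick worked example of the double-count (the dollar figures here are illustrative assumptions, not the actual amounts from the comment): a $100,000 vest sold minutes later for $100,100.

```shell
# RSU vest taxed as wages, then the sale reported with no cost basis.
vest=100000                          # already taxed as ordinary income on the W-2
proceeds=100100                      # sale moments after vesting
gain_reported=$(( proceeds - 0 ))    # basis omitted on the 1099-B: "100% gain"
gain_actual=$(( proceeds - vest ))   # true short-term gain: nearly zero
echo "reported gain: \$$gain_reported vs. actual gain: \$$gain_actual"
```

Taken at face value, the taxpayer owes ordinary income tax on the $100,000 vest and short-term capital gains tax on a phantom $100,100 gain, instead of on the real $100.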
What would happen in your scheme in such a situation is that the government's pre-filled form would show up as a massive tax bill. Assuming the taxpayer survived the resulting heart attack, they'd just have to file a return that shows the correct cost basis. So it's fine; no worse than the status quo, and better for most people.
Everyone should complete paper forms for their taxes. Paper returns are harder for the IRS to process and cost them more. If people boycotted the expensive software options for one year and slammed the IRS with paper forms, this would be reversed posthaste.
Or you could just fire most of the IRS staff and reduce their capacity that way... which the party currently in charge is already happily doing, so I'm not sure why you think reducing their capacity by burying them in paper would cause a reversal. It would just make it even easier for wealthy people with long, complicated returns to cheat outrageously, confident the IRS doesn't have the capacity to audit them. That is the GOP's goal.
"A mind is a terrible thing to have leaking out your ears." -- The League of Sadistic Telepaths