Comment Re:It's due to sample code from an AMD library (Score 1) 79
Reminds me of IBM packaging early PCs with Windows licenses when they had OS/2 and it was arguably a better OS in many respects.
Having worked on MMO platforms and video games, I can tell you that working 800 hours of overtime in a year to support policing/EMS dispatching and mobile apps was not as miserable as what I lived and saw as a game dev. I have subsequently worked on another platform for online gaming and felt the same way there. And phone app devs are constantly crunched brutally because of the small budgets everyone expects to spend on a phone app.
The video game industry preys on young devs and treats almost all of them brutally. I had friends who went from our company, when it was folding up, to Ubisoft, and their workload there was even more miserable and unhealthy.
Now, I tend to find $80 for a game to be too much for my wallet. That said, I don't think games are easy or can be done cheaply - content and the engine underneath takes a lot of work to get right. I just don't have the $.
That said, I wouldn't be a game dev again for any reason other than abject desperation.
Some of them have lots of pretty things to see... but have less actually useful content than a MUD.
If they pitchforked AMD for borking itself... that'd be some amusement....
From what was seen lately, they seem to reference an 'internal storage quota issue'.
Maybe somebody accidentally tried to backup all their data and it choked their own storage network.... lol.
Or somebody let something run that wasn't well tested.
...and the smart ones use a pen to number their cards in case of misadventure with the stack before it gets into the card reader.
Though, mind you, real programmers, if you want to go back far enough, were writing machine code, peeking and poking things around, and managing the CPU's registers, memory, interrupt handlers and that sort of stuff the hard way (which I don't miss, much as I don't miss manually managing DMA and interrupts on hardware). And when prototyping, sometimes you'd be burning your own ROMs or EPROMs and manually aligning the code in the ROM (or figuring out how to tell the processor to look at a non-default location to start loading code from).
And really, real programmers use dip switches.
Yes, I am a Code-a-saurus Rex... and one who feels his rendezvous with the tar pits coming....
(I kind of gave up when the university engineering program I had attended changed first-year programming and mechanical drafting to shy away from teaching FORTRAN, Pascal, and how to manually create shapes without fancy toolkits (to understand the basis of CAD systems). Replacing those sorts of things? How to program Excel statements and macros, and how to use Word.)
I'm not going to argue that Excel doesn't have programming or that spreadsheets don't matter, but one really should at least know some of the most commonly used programming languages, if only to learn what the hardware really executes underneath and how different types of languages enforce programming paradigms.
I can't count the number of times newer graduates had problems with some moderately complex task because the code they wrote showed they didn't actually understand interrupts, how processes/threads/skeins/etc. worked, or how memory was managed and accessed by the hardware. You get so far from it with toolkits that you can end up using the toolkit in ways that are really not great choices for efficiency or bug-free code.
I don't want to go back to programming 80C196 or 8086 assembly, or even go back to C, but it's good to know you can step down to that level if you really have a critical app where you need performance or need to do something the higher-level language doesn't easily support. The cases are rare, but somebody who never studied the hardware and never studied the lower-level languages will have one heck of a time in those instances.
Last observation:
There are languages that make certain tasks easier and more efficient. However, if you are looking to hire talent and you get the one guy who knows FancyLanguageX, and he writes some cool cryptic code with it and then gets hit by a car, the rest of the dev team can have their hands full trying not to bust the deployed product when they don't really get the details of FancyLanguageX's paradigm or idiosyncrasies.
In real production environments, Java, C, C++, C# and some of the mobile languages are pretty much the go-to (ignoring web front ends like LAMP setups), with some scripting (Bash, Perl, Python among them). Why? Because you can find replacement programmers if your star developer dies, gets hired away, or collapses into stress leave. The odds are someone else can reasonably pick up the mantle and move ahead.
Neat stuff is good, but if it takes an experienced programmer who is new to the task a long time to puzzle through the details of what's happening in a bit of code written in an uncommon tool, then the advantage may largely be lost. (Exception: efficiency matters for things like large scalable web services and database systems, so some less common tools may be justified there... but pay your team well and train up enough people that one car accident doesn't bork the company.)
John Likes Prolog.
Prolog likes Mary.
I once chatted with the fellow who invented LISP, following up on a slashdot Q&A he did where he broke programming languages into (IIRC) more than 10 categories of language types and gave a few examples of each.
His view was that languages are tools and each has strengths and weaknesses and programmers should learn at least one from each category to understand that kind of programming tool and what it is or is not good for.
He gave me a respect for *why* LISP existed (something I did not get from my survey of languages course in first year).
He was (or perhaps still is, not sure if he's still with us) a wonderful person to talk with and generous with his perspective.
With the patent system as it is, where genetic codes and proteins can be patented and where protection for drug profits is long and deep, situations like this come up that allow unscrupulous companies to hike drug costs ridiculously, like these clowns at Mylan.
Yes, many drug research efforts don't pan out. But epinephrine has been around for a long time. Is anyone seriously going to try to tell me a legitimate $50 price in 2007 became $304 in 2017? Even given the bogusly low inflation rates that are officially reported, that's insane.
This is profiteering. If the company didn't need to profiteer in 2007, why do they need to in 2017? No good reason, methinks.
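The scale of that price hike can be checked with a quick compound-growth calculation (a minimal sketch; the $50 and $304 figures are the ones quoted above, and the 10-year span is assumed from the 2007 and 2017 dates):

```python
# Implied annual price increase for a $50 -> $304 change over 10 years,
# using the standard compound annual growth rate (CAGR) formula.
start_price = 50.0    # 2007 price, per the comment
end_price = 304.0     # 2017 price, per the comment
years = 10

cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Implied annual increase: {cagr:.1%}")  # roughly 20% per year
```

That is an increase of roughly 20% per year, every year for a decade, which is an order of magnitude above any reported inflation figure.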
How about the definition of sole source is 'no equivalent product available at present'?
And how about you cap the rates at which drug costs can increase unless the providers can show material evidence that their costs have escalated so much?
I don't have $608 to shell out (US) for something I have to replace every 1-2 years. I'm carrying an old epi-pen that's probably not as efficacious now, but it's likely still better than no pen. I just can't afford the money to get a new one (let alone two, since protocol says you hit yourself with the first and about 30 minutes later the second if you haven't reached emergency medical care).
This should be a true generic. There should be equipment whose patents have an earlier mandatory expiry because they exist in the space called 'in the interest of public health'. I'm not suggesting these guys shouldn't have got their money back, but seems to me they are well beyond that point now.
On the other hand, this is exactly why the government or NGOs should be investing in some sorts of medical research in the public interest and making the product patents entirely open and available.
Epinephrine isn't patented; it's the injector that is. This seems like the kind of thing a Gates Foundation or even the government could underwrite the development of (and may already have for atropine and the like in prior days, if we count those syrettes as an early version). Make the injector patent available and then the product truly is generic, because the epinephrine itself is not patented.
The reality is that big Pharma has great lobbyists, political connections, and lawyers and the whole US patent system around biomedical issues defies any sort of common sense or rational thinking.
I hear rumours of alternatives, but I'm not sure they are available beyond the US borders. The Epipen fiasco and the price rise has hit many of us living in other countries too, but I'm not sure any alternatives exist where I live. I am going to look into that now though.
Patents should help protect innovation, but they should not form monopolies artificially (well, other legislation may do that, but it also needs to be looked at), should not have excessive duration, and should have clauses for medical equipment so that if the equipment's price rises too quickly, or the provider becomes a sole source, the patent becomes licensable by other companies for a very modest fee. At some point, the public interest has merit at least as great as profits for corporations.
I wish I had a pile of mod points. Anyone who can get Moloch, Captain Yossarian, and the reasonable use of the word golem into a post is all right in my books.
And your comment was spot on too.
Horsepuckey.
Engineer is derived from the use of ingenuity. That's not germane only to something with pistons. Nice try.
Where I worked, our rule was: we deliver on time; we deliver the major features for the iteration; everything else is negotiable and can be dropped or shifted to the next one. This kind of strategy allowed us to run projects of 18-20 iterations of 3 weeks each and be major-feature complete and stable at the end, while having delivered working code every 3 weeks to the client so they could start seeing progress, testing, writing manuals, and so forth.
On-time delivery of an iteration (and repeatedly doing so) gives a customer confidence in your ability. Do this a few times, and if you then hit a big snag, you've got some cred in the client bank and can negotiate a feature shift or a partial implementation in an iteration.
When you don't deliver software frequently and regularly, you can get to the end of the project and the customer can shelve the excellently functioning product for some trivial reason (I have seen it happen - insane, but customers make choices like that). Deliver early and often and the customer's issues get identified early (good communication also required) and handled.
Misuse of a tool to justify not moving forward is insanity. It's also a choice.
Our approach to that same problem would be:
You have a story that looks like 20 story points, but your average velocity for the team is 13. We need either a slightly longer sprint (say a third week, which should yield about an additional 7 points based on your 13 over 2 weeks), or to add some manpower to increase our velocity. If neither is feasible, then we should create two substories that each describe a portion of the work, and tackle one in one iteration and one in the next.
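That back-of-the-envelope capacity check can be sketched in a few lines (a minimal sketch, assuming, as the comment does, that velocity scales roughly linearly with sprint length; the 13 and 20 point figures are from the example above):

```python
# Estimate whether a big story fits if the sprint is extended by a week,
# assuming velocity scales roughly linearly with sprint length.
velocity_per_sprint = 13   # observed team velocity, points per 2-week sprint
sprint_weeks = 2
story_points = 20          # the oversized story

points_per_week = velocity_per_sprint / sprint_weeks   # 6.5 points/week
three_week_capacity = points_per_week * 3              # ~19.5, "about 20"

print(f"Estimated 3-week sprint capacity: ~{three_week_capacity:g} points")
# ~19.5 vs. a 20-point story: borderline, which is why the fallbacks are
# adding manpower or splitting the story into two substories.
```

The point isn't precision; it's that a quick linear estimate tells you whether to extend the sprint, add people, or split the story.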
You can't be religious about every assertion made in theoretical models of how to apply something. We built (ported) a huge system from one OS to another (N-tier, multiple host types, complex interactions within a software stack on a host and across between hosts) and of course there were some big stories at the start. So we broke them down.
We also knew that in the early sprints, until enough things were together, we couldn't do much functional testing (too many bits needed for anything to work). So we set the testing expectation moderately in the early sprints with more weight on that as enough of the structure came together to allow functional testing.
Seriously, it sounds like you guys are too rigid and too unimaginative by an order of magnitude.
Not accurate. Management has usually broken out content by major features and knows roughly how many sprints will be needed. It is quite possible to use Agile and have a view to the future; it just isn't set in stone and inflexible.
It sounds like most Agile detractors (or detractors of any tool or technology) have had issues with it being put in place in a very inflexible way.
Flexibility (just enough process to be useful, not enough to be cumbersome) is generally the best approach. The right balance isn't always clear and sometimes needs to be dynamic. What is known, though, is that if you try to treat any technology or methodology like a religion (zealotry, and you can't get enough of it), bad outcomes ensue.
I have never seen anything fill up a vacuum so fast and still suck. -- Rob Pike, on X.