
Comment C++ isn't fun, Rust isn't a must. (Score 2) 160

An issue with C++ is how confidently people defend its shortcomings. Listen, C++ has a lot of perks, but it has downsides too. C++'s memory issues are unpleasant to deal with, and they make writing & debugging code quite annoying. I was working on a project a while back where I kept running into errors that required debug time. I'd guesstimate 90% of my time was spent chasing down weird memory issues, some of which didn't even throw an error and resulted in silent logic bugs instead. Granted, that might be because I'm an idiot. However, after I moved to Rust, that debug time dropped to around 20%. That being said, not everything needs to be rewritten in Rust, so the Rust crowd also needs to cool it with the "if it's not Rust, it's dangerous" attitude. Rust is also filled with unsafe blocks of code where the safety guarantee amounts to "trust me bro".
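To be clear about what I mean by "trust me bro" (a contrived sketch, not from any real codebase): inside an unsafe block the compiler stops checking, and correctness rests entirely on a promise the programmer makes.

```rust
fn main() {
    let v = vec![10, 20, 30];
    let p = v.as_ptr();

    // Safe Rust would force a bounds check here. Inside unsafe, we just
    // promise the compiler that the offset is in range. If that promise
    // is wrong, we get the same silent memory corruption C++ allows.
    let second = unsafe { *p.add(1) };

    println!("{}", second); // prints 20 -- but only because we kept our promise
}
```

Change that `add(1)` to `add(10)` and it still compiles without a peep; the "guarantee" is just the author's word.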

Comment Re:She's got a point (Score 1) 174

Yeah, that's where you lost us. Do you really think Israel is spending billions on AI to target some dirt-farming civilian?

If you built software to do X and users are meant to use it to do X, but a bunch of people whose access you control are consistently using it to do M over prolonged periods of time, then regardless of your intent, you have built software that does M. Furthermore, if M is a critical error that defies your policies and has happened multiple times over a prolonged timespan, that is either negligence or an admission that M is acceptable within the policies that govern usage. For any decent software engineer, that is basic logic. The stakes here are higher, but the logic is the same. Hence my original post that she's got a point.

Comment She's got a point (Score 5, Insightful) 174

People are commenting about what "he" did, which demonstrates they didn't even read the article or watch the video before commenting - it was clearly a woman. The mass email she sent outlines very clearly why she did this, and it's really hard to refute anything she said. The AI work the engineers are doing is being used to transcribe & analyze communications, which in turn is used to pick targets for what the international court has deemed a genocide. Slashdot is a community of many engineers, so I'd imagine most can understand how this isn't a leap. If I were an engineer at a company building audio transcription systems, then found out that code was used to target a family and got a child killed, I'd be pretty damn furious too. That's just one example; they're using Microsoft's AI in a far broader sense to tech-enable genocide. I'm really surprised more people aren't agreeing with her, but then again, it's obvious most people didn't even read the article. This isn't a case of "woke person gone crazy"; this is a legitimate stance.

Comment Reverse Penalty (Score 3) 52

DoorDash made over $8.6 billion in revenue last year. This fine is the equivalent of someone making $100k per year being caught stealing, then being fined $196. There need to be rules that such cases trigger an audit of who along the executive chain authorized this, that each of them is penalized 25% of their highest annual salary from the time the policy was implemented to the time the case started, and that 100% of their bonuses for the next 2 years automatically go to the victims. The company should also be fined at least 5% of its gross revenue. I'm not saying those are the right numbers or the right approach, but overall, there need to be real penalties for things like this.
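Back-of-the-envelope, the comparison works out like this (the implied fine is derived from the numbers above, not an official figure):

```rust
fn main() {
    let revenue = 8_600_000_000.0_f64; // DoorDash annual revenue, per the post
    let salary = 100_000.0_f64;        // the everyman comparison point
    let scaled_fine = 196.0_f64;       // what the fine feels like at that salary

    // The fine-to-income ratio is what makes the penalty toothless.
    let ratio = scaled_fine / salary;    // ~0.196% of income
    let implied_fine = ratio * revenue;  // the actual fine this implies

    println!("ratio: {:.3}% of income", ratio * 100.0);
    println!("implied fine: ${:.1}M", implied_fine / 1_000_000.0); // ~$16.9M
}
```

A fifth of a percent of gross revenue isn't a deterrent; it's a line item.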

Comment Unusable & Unreliable (Score 3, Interesting) 19

As much as I've tried, I have a tough time getting on board with any product Google releases for business use. The setup is often incoherently complex, whether it's signing up, setting up billing, or generating an API key. The documentation is convoluted and often leaves out key details. Their tutorials are like taking directions from a GPS that skips every fourth turn and gaslights you into thinking you should have just known. Any sort of coding assistance from a product they create will likely propagate this culture of absurdity into other codebases. Example 1: try signing up for Gemini, then try signing up for ChatGPT. ChatGPT takes a few seconds; Gemini requires going through multiple screens, a billing system, setting up an admin org, etc. Clearly, this was designed by committees of people trying to adhere to their politics and structure rather than people acting on logic. Example 2: try integrating with Google Cloud Storage vs. Amazon S3 using any language or framework. Amazon S3 is pleasantly easy; generating credentials for Google Cloud Storage is a damn nightmare. Imagine having those same people suggesting changes to your codebase. Hell nah. Then imagine those people discontinuing the product out of nowhere. People often justify this by saying "that's because they're Google, and these design patterns are meant to facilitate massive scale!" I'd get it if it weren't for the fact that many other companies also operate at massive scale, and their designs & APIs make sense.

Submission + - OpenAI Researchers Warned Board of AI Breakthrough Ahead of CEO Ouster (reuters.com)

An anonymous reader writes: Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm was a key development ahead of the board's ouster of Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing. Reuters was unable to review a copy of the letter.

According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events. After the story was published, an OpenAI spokesperson said Murati told employees what media were about to report, but she did not comment on the accuracy of the reporting. The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe. Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend. In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Comment A testament to poor UX / UI / quality control (Score 1) 69

I've been a long-time Android user, but around a year ago I became fed up. The interface was updated to be more "intuitive", and in turn it became like a glue-eating-child version of iOS. Android's navigational structure was appealing to its users, and in trying to become more like iOS, they ended up getting rid of what their users loved while failing to deliver what iOS users loved. For example, I used to be able to search for "Mobile Hotspot", and it would just show me the mobile hotspot settings page. Last I checked, searching for it brings up help articles and a bunch of other stuff, and the actual mobile hotspot settings page isn't there. Why would you show me a help article for the thing I'm trying to do instead of just showing me the thing I'm trying to do? I'm willing to wager their interface was the result of "consensus through meetings" between management, marketing, and UX/UI designers; no individual could be absurd enough to make such bad decisions alone. They desperately need someone to spearhead their interface design who understands their core users, accepts that those users avoid iOS for a reason, and focuses on a UI that maximizes the experience they enjoy.

Beyond the interface were the irritating bugs. If my Samsung phone lost service, it wouldn't just retry looking for a signal; I'd have to toggle airplane mode on and off to force it to search again. This was beyond irritating. Android Auto would constantly disconnect from my car, except when I wanted it to. If you ever made the mistake of enabling driving mode on your phone, getting out of it was absurd: try restarting your phone, your car, or burning both in a dumpster, and your phone would still stay stuck in driving mode with that irritating interface. The phone would become absurdly slow over time, with terrible battery degradation. Calling over Wi-Fi would constantly fail to actually use Wi-Fi.

I get that bugs are a thing, but these are core behaviors that any reasonable QA department should catch. There's also the issue of random things just breaking. Any tech person with a family full of Android users can probably tell you about constantly having to fix random things on their parents' phones that defy any explanation for why they broke. And no, my family are not iPhone users. Then there's security: the Play Store is basically a malware distribution center. And the camera, oh the camera. The zoom was insane, and on a hardware level it should outperform an iPhone. However, for whatever reason, the pictures taken on my friends' iPhones were always better than those from my S21 or S22. Someone needs to invest in the digital image processing on Androids; AI-based enhancement would be a great place to start, but I'm not sure anyone is actually steering the ship there rather than leading by managerial consensus and making decisions to avoid doing what's wrong (and thus rarely doing what's right).

I'm an iPhone user now. I still dislike iPhones, but not as much as I hate what Android has become. I want to use a Samsung device again and would gladly fork over money if they'd stop making terrible choices and implement some reasonable QA.
