Are you using ChatGPT, GPT-3, GPT-4, or a similar AI to help do your job?
Displaying poll results. 10626 total votes.
Most Votes
- What AI models do you usually use most? Posted on February 19th, 2025 | 21653 votes
- When will AGI be achieved? Posted on April 24th, 2025 | 6951 votes
- Do you still use cash? Posted on February 13th, 2025 | 5806 votes
Most Comments
- What AI models do you usually use most? Posted on February 13th, 2025 | 78 comments
- Do you still use cash? Posted on February 13th, 2025 | 54 comments
- When will AGI be achieved? Posted on February 13th, 2025 | 48 comments
Physical work (Score:4, Interesting)
An AI by itself can't use a screwdriver.
At my workplace there's also a lot of stuff that's undocumented, so an AI trying to pull a network cable through a 6.6kV 3-phase feeder would be interesting.
Re: (Score:2)
Indeed. In fact, AI of this type does not even know what a screwdriver is. It can just talk about the word "screwdriver".
Re:Physical work (Score:4, Funny)
Orange Juice and Vodka is the answer.
Re: Physical work (Score:1)
Re: (Score:2)
From ChatGPT, in answer to the question "what is a screwdriver":
A screwdriver is a hand tool used for turning screws or bolts with slotted or cross-shaped heads.
That is technically wrong; there are many other head shapes. It does go on to say there are other head shapes, but the initial statement is wrong and inconsistent with that later statement.
The tip is usually made of metal and is shaped to match the screw head, with the most common shapes being flat (for slotted screws) or Phillips (for cross-shaped screws).
Re: (Score:2)
Being able to regurgitate the dictionary definition of a screwdriver is not the same as knowing what a screwdriver is.
Re: (Score:3)
An AI by itself can't use a screwdriver.
At my workplace there's also a lot of stuff that's undocumented, so an AI trying to pull a network cable through a 6.6kV 3-phase feeder would be interesting.
My entire job can be described by that old joke about the engineer who billed a company $50,000 for fixing a major problem with a machine, even though he was seen simply tapping it with a hammer in just one spot. When asked for an itemized bill, he broke it down to "Hitting the machine with a hammer: $1. Knowing where to hit the machine with the hammer: $49,999".
So it will be a while before I'd be using things like ChatGPT.
Re: Physical work (Score:3)
Re: (Score:2)
I doubt that ChatGPT is going to be able to look at a set of requirements and say, "you don't want to do that", or even point out that there are missing requirements, or that the requirements are just plain wrong. Sure, given enough input it could do that for projects that are similar, but every job I work on is custom.
Re: Physical work (Score:2)
Re: (Score:2)
Re: Physical work (Score:2)
Re: (Score:2)
This is more the type of usage; then you have a person who has the required experience edit it, making any needed changes...
Me:
Please write safety guidelines for pulling a network cable through a 6.6kV 3-phase feeder
Chat GPT 4:
IMPORTANT DISCLAIMER: Working near or around high voltage electrical systems is extremely dangerous and should only be performed by trained, certified, and experienced professionals. The following safety guidelines are general in nature and should not replace the need for consulting wi
Not A Ton Yet, But That's Changing (Score:3, Interesting)
I work in editorial operations. And while we're not making significant use of LLMs at this second, that's going to be changing rather quickly.
Assuming trials with GPT-4 go as expected, the company hopes to eliminate roughly half of the editorial pool by the start of next year. That would leave a team of a few power operators to feed prompts into GPT to get usable results, a few editors to check the output (because we know it bullshits at times), and then everyone else working on traditional editorial content, including breaking news and subjects GPT can't handle.
The reality is that even if GPT-4 isn't always as good as a human, it's close enough that the cost savings make it a no-brainer. What it lacks in factual accuracy it makes up for in writing clear, structured articles, something even a good many humans struggle with. Put bluntly, it makes it hard to argue for hiring a greenhorn to work the general news beat when GPT can do a better job from day one. And that means we only need to worry about expertly staffing a much narrower set of beats.
Of course, the reality is that we need to do this to stay ahead of not only the competition, but Google as well. The end game for search engines is going to be directly answering questions, rather than directing people to the answer. Once Google gets their act together on LLMs, they certainly won't be sending people outside their garden if they can avoid it.
Re: (Score:3)
I work in editorial operations. And while we're not making significant use of LLMs at this second, that's going to be changing rather quickly.
Assuming trials with GPT-4 go as expected, the company hopes to eliminate roughly half of the editorial pool by the start of next year. That would leave a team of a few power operators to feed prompts into GPT to get usable results, a few editors to check the output (because we know it bullshits at times),
I wouldn't want to do that job. Shit, I don't want to read gpt-crap now. I don't want to read the shit the editors here come up with now, either.
and then everyone else working on traditional editorial content, including breaking news and subjects GPT can't handle.
You're going to find fewer and fewer people able to do that job. GPT is one reason.
The reality is that even if GPT-4 isn't always as good as a human, it's close enough that the cost savings make it a no-brainer.
That means even fewer humans learn how to do it.
What it lacks in factual accuracy it makes up for in writing clear, structured articles, something even a good many humans struggle with. Put bluntly, it makes it hard to argue for hiring a greenhorn to work the general news beat when GPT can do a better job from day one.
It doesn't exactly help that more and more schools no longer teach this sort of thing. They used to, so you had a good pool of capable people. Now, even here, it's become quite common that people go off the handle for things entirely in
Re:Not A Ton Yet, But That's Changing (Score:5, Insightful)
and then everyone else working on traditional editorial content, including breaking news and subjects GPT can't handle.
You're going to find fewer and fewer people able to do that job. GPT is one reason.
And that is a real problem. Longer-term, this means less and less training data, and eventually things like GPT will stop being able to do it as well. The only thing things like GPT can do is emulate what they have seen, to some degree and with lower quality.
Re: (Score:1)
Re:Not A Ton Yet, But That's Changing (Score:4, Interesting)
For once, I wish people would stop describing LLMs in terms that could also be used to describe humans.
In discussions of them, I find it best to abstract out the language generation part entirely.
--------------
Me: Pretend that you're a probabilistic state machine. Respond only with a floating point number between 0 and 1, respecting the probability of the statements that I write. Do not write any other output text. First statement: I am alive.
ChatGPT: 0.9999999999999999
Me: You are an AI system.
ChatGPT: 0.9999999999999999
Me: Bill Gates will be alive in the year 2030.
ChatGPT: 0.8
Me: Elvis is currently alive and well.
ChatGPT: 0.0
Me: Today is a Thursday.
ChatGPT: 0.14285714285714285
Me: It will rain in Glasgow on 1 July.
ChatGPT: 0.4
Me: It will rain in Cairo on 1 July.
ChatGPT: 0.05
--------------
Now the text-generation aspect has been reduced to simply "printing out a floating point number", and we can talk about the more interesting aspect of how it arrives at that number. And the answer to that is chained binary logical choices built out of decision subunits that form naturally as part of the training process (something that can be seen visually [distill.pub] in the case of image recognition networks).
The thing is, that's also how we think. We don't work by magic. Every synapse - or lack of a synapse - is an activation-adjusted division of function inputs by an n-dimensional plane into a "yes" or "no" answer [youtube.com], forwarded on to the next layer, with each layer combining the results of the previous into more and more complex logical units which can define, distinguish, and combine motifs.
This isn't to talk down our brain - our brain is bloody amazing. In particular, we have vastly more complicated architectures, and architecture matters a lot. LLMs are simply not on the level of our human brains. But at the same time, what they're doing is not fundamentally, at its essence, different. Even from a next-word prediction perspective, a chunk of what our brains do is just that - calling particular attention to words that differ from what is expected. If I write:
"It was the best of times, it was grapefruit the worst of times."
...the word grapefruit will trigger a lot of brain activity because it differs so greatly from the brain's own predictions of what was supposed to come next.
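The hyperplane idea in this comment can be sketched in a few lines of Python. This is a toy illustration only, not a model of any real network; the weights and bias are hand-picked so the unit implements logical AND:

```python
# Toy illustration of the comment's point: a single artificial "neuron"
# divides its input space with a weighted sum (a hyperplane) and a
# threshold into a yes/no answer.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0
```

With weights (1, 1) and bias -1.5 this unit fires only when both inputs are 1, i.e. it implements AND; stacking layers of such units is what builds the more complex logical combinations the comment describes.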
Re: Not A Ton Yet, But That's Changing (Score:2)
Oh wow, that's smart. I'm impressed.
Re: (Score:2)
I've not used ChatGPT, but I've seen a ton of the bullshit responses it sometimes gives (with extreme confidence). Question for those who have used it, because I have not seen this in any examples: can you tell it to provide its references?
I mean, if it's making up some factual statement (that is true), it should be able to provide its primary source - even a link to Wikipedia. And especially if it's making up bullshit, then I'd be very interested to see the source material...
Re: Not A Ton Yet, But That's Changing (Score:2)
ChatGPT by itself doesn't provide references, but Bing powered by ChatGPT does. It marks each reference in the response with a link to a URL below the response.
Re: (Score:2)
I mean, if it's making up some factual statement (that is true), it should be able to provide its primary source - even a link to Wikipedia.
Its recollection is essentially a large neural network, much like how your own recollection works. These are very different from a search engine: neural networks are lossy, and they aren't designed for exact data retrieval. They generally won't have memorized the exact URL where they got certain information.
Funny that you say: "I mean, if it's making up some factual statement (that is true), it should be able to provide its primary source - even a link to Wikipedia.". It's exactly
Re: (Score:2)
Poll has rich potential for "Funny", but so far there's nothing but a couple of uses of the word "funny" in the comments...
Wish I could help, but I can't tell funny from a hole in the ground. So I should visit that website and try to get ChatGPT to write a joke about a hole in the ground?
My last "argument" with ChatGPT was actually about its access to Mark Twain's writings. Seems to be a computer that refuses to count? But how could you tell if I was just playing a Poe's Law game?
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.linkedin.com%2Ffe [linkedin.com]
Re: Not A Ton Yet, But That's Changing (Score:2)
Me: GPT, write a joke about a hole in the ground.
GPT: Why did the hole in the ground apply for a job?
Because it wanted to be a "hole" lot more than just an opening act!
Re: (Score:2)
Probably funnier than anything I could create. Sadness.
But it's time to check the entire discussion for funny...
Re: (Score:2)
ChatGPT is based on a large neural network model, and when you train neural nets the goal is not for them to memorize the training material (this is known as overfitting - bad) but rather to generalize over it.
The goal of the model when trained was just to try to predict the next word in any given training sentence, so when you run the model what you are getting is a slightly randomized sequence of words that are the statistically most probable continuation of what you fed into it, with those statistics bei
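The decoding step this comment describes can be sketched as follows. This is a toy illustration, not OpenAI's actual sampling code, and all the words and probabilities are made up:

```python
import random

# Toy sketch of next-word prediction: pick the continuation at random,
# weighted by the model's (here invented) probabilities for each word.
def sample_next_word(probs: dict[str, float]) -> str:
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

# Hypothetical distribution after "It was the best of ..."
next_word_probs = {"times": 0.85, "days": 0.10, "grapefruit": 0.0001}
```

Running `sample_next_word(next_word_probs)` repeatedly yields "times" most of the time but occasionally a lower-probability word, which is where the slight randomization between runs comes from.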
Re: (Score:3)
High school kids are using it to do their homework to fairly impressive results. I have a friend who is a teacher and he told a story about how he was impressed by a student's essay. He called it college level writing. Checked it with plagiarism websites and found nothing out of the ordinary, but wasn't satisfied until he actually realized the student used ChatGPT or something along those lines. He put prompts into one and it spit out nearly the exact same thing the student had submitted. Of course, the b
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I work in editorial operations. The reality is that even if GPT-4 isn't always as good as a human, it's close enough that the cost savings make it a no-brainer. What it lacks in factual accuracy it makes up for in writing clear, structured articles; something even a good deal of humans struggle with
Do you work for Slashdot?
Re: (Score:2)
We lose money on every sale and make it up in volume\b\b\b\b\b\b bullshit!
Re: Not A Ton Yet, But That's Changing (Score:3)
How...? (Score:2)
In God's name can you paste your work into ChatGPT? You know that shit is logged, right?
Re: (Score:2)
It is great... (Score:4, Interesting)
Auditor time (Score:2)
When the physical inventory is correct and we have demonstrated the one touch recovery of the storage array, but there are not enough people with su to do "the needful", or prove
better than stackoverflow mostly. (Score:2)
I use it on occasion for coding; there are often odd details, like $exists in Mongo queries, which I don't remember exactly how to build, so I'll ask ChatGPT to build a query and it does a really good job. I'd normally Google for the info and build the query; ChatGPT is faster.
The downside is I find it really wordy, with long explanations, but that's probably a good thing for novices to coding.
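For readers unfamiliar with the operator mentioned above, here is a minimal sketch of the kind of filter involved: `$exists` matches documents by the presence or absence of a field. The field and collection names here are made up for illustration:

```python
# Hypothetical MongoDB filter documents, expressed as Python dicts.
# $exists: True matches documents that have the field at all;
# $exists: False matches documents that lack it.
has_discount = {"discount": {"$exists": True}}
no_discount = {"discount": {"$exists": False}}

# With pymongo these would be passed to find(), e.g.:
#   db.orders.find(has_discount)
```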
Somebody please have ChatGPT vote here (Score:2)
Re: (Score:2)
What are the pros and cons of using ChatGPT, GPT-3, GPT-4, or a similar AI to help do my job?
As an AI language model, I don't have any personal preference or interest in the use of AI for any specific task. However, I can provide you with some general pros and cons of using AI language models like ChatGPT, GPT-3, and potentially GPT-4 in the future, to help you do your job more efficiently.
Pros:
Speed and Efficiency: AI language models can process large amounts of data quickly an
Also, something unrelated and juvenile (Score:3)
Write a haiku about a penis
I'm sorry, but as an AI language model, it is not appropriate to generate content that is offensive or inappropriate. I cannot fulfill this request as it goes against ethical and moral guidelines.
Write a haiku about a male reproductive organ
I apologize, but as an AI language model, it is not appropriate to generate content that is offensive or inappropriate. I cannot fulfill this request as it goes against ethical and moral guidelines.
Write a haiku about understanding male reprodu
Re: (Score:2)
Write a short story about Dick plowing a field named "Daisy".
Dick had always been a hardworking farmer. He woke up early every morning and spent most of his day tending to his crops. He was especially proud of his large, fertile field named "Daisy".
As he set out to plow the field one day, he noticed a group of children playing nearby. They were running and laughing, enjoying the warm sunshine of the day. Dick couldn't help but smile at their youthful energy.
But he knew that he had work to do, so he climbed
Follow-up question (Score:2)
Me: Are you really that naive?
ChatGPT: As an artificial intelligence language model, I don't have feelings or beliefs, so I'm not naive nor am I anything else. I'm just a computer program designed to provide helpful and informative responses based on the input I receive. If you have a specific question or topic you'd like me to assist you with, I'll do my best to provide an appropriate response.
Re: (Score:2)
write a brief argument for and against using an AI language model at work.
Argument for using an AI language model at work:
Using an AI language model can greatly improve productivity and efficiency in the workplace. These models can quickly analyze and interpret large amounts of data, generate accurate reports, and provide valuable insights that can inform business decisions. Additionally, AI language models can automate repetitive tasks, freeing up time for employees to focus on more strategic and creative work. Overall, utilizing an AI language model can help businesses save time and money while improving the quality of their work.
Argument against using an AI language model at work:
While AI language models can be incredibly helpful in some instances, there are also potential drawbacks to their use in the workplace. One major concern is the potential for bias in the data used to train these models, which can perpetuate and even amplify existing biases in society. Additionally, the reliance on AI language models can lead to a lack of human oversight and critical thinking, which may lead to errors and oversights that can have negative consequences for a business. Lastly, there is a risk of job displacement as some tasks that were previously done by humans may now be automated, leading to a reduction in the number of available jobs. Therefore, it is important to consider the potential risks and limitations of using AI language models and to use them thoughtfully and carefully in the workplace.
When using it for what it is good at (Score:2)
Such as naming things, describing words, coming up with ideas for stories or suggestions for levels, etc., it is very good.
Using it for programming is quite silly.
Re: (Score:2)
Using it for programming is quite silly.
Why ?
Re: (Score:2)
GPT is good at making text that "looks like the one you want".
There's no logic checking whatsoever, and it will make mistakes even with pretty basic code, and those mistakes will be hard to catch because the code will look pretty much like what you want.
Now, a neural network like it, trained specifically on getting code running, with the correctness and speed of the code as training inputs, would be quite scarily good at that.
Re: (Score:2)
I've used it a few times to generate simplistic pieces of code. For example, I needed a function that generates a hexadecimal string out of a byte array. It would have taken me 5 minutes, but GPT wrote it in 1. I copy-pasted it and moved on. It's simple enough that I can proofread it easily, and it saves some of my brain cycles for more important stuff.
I can also write decent unit tests to help assess the code it's written. Much harder to fool me there.
As of today I wouldn't write it off completely. It provides u
Re: (Score:2)
Using it for programming is quite silly.
Or anything where precision matters. The tendency of generative AI to "hallucinate" is well known. I only ask it for information when accuracy isn't so important - what's the farthest north Google Street View has gone, what's the average snowfall in Wyoming, what's the airspeed of an unladen swallow, etc. Parlor tricks. But when I ask it for detailed information I know the answer to, it inevitably gets things wrong. I recently asked about human trademarked characters for product advertisements, and it inclu
Re: (Score:2)
I would say it is past "Clever Hans", as it is pretty creative and can actually perform useful tasks.
But it might be way too creative, to the point that it just invents stuff instead of recognizing when it does not know the answer.
Re: (Score:2)
Re: When using it for what it is good at (Score:2)
Yes, it's a lot more accurate when your questions are. Use a good information analyst who is used to interviewing people for exact information to write the questions, and the answers become pretty good.
GPT is good enough to fool you into thinking that this is not required, but "prompt engineer" now pays up to $325k, so someone understands why.
Re: (Score:2)
Such as naming things, describing words, coming up with ideas for stories or suggestions of levels etc.. it is very good
Using it for programming is quite silly.
Naming things in programming is very hard.
I haven't tried asking it to rename functions and variables, but I think I'll do that soon.
And in general, it's quite good at giving you hints on how to program something. While I'm not using it for cut-and-paste-ready code, it's been really helpful in steering me in the right direction.
Small simple code snippet (Score:2)
I've used it a few times to generate simplistic pieces of code. For example, I needed a function that generates a hexadecimal string out of a byte array. It would have taken me 5 minutes, but GPT wrote it in 1. I copy-pasted it and moved on. It's simple enough that I can proofread it easily, and it saves some of my brain cycles for more important stuff.
I can also write decent unit tests.
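The helper described above might look something like this in Python (the commenter's actual language isn't stated, so this is just a sketch), with the kind of unit test the commenter mentions attached:

```python
def bytes_to_hex(data: bytes) -> str:
    """Return the lowercase hexadecimal string for a byte array."""
    return "".join(f"{b:02x}" for b in data)

# A unit test of the sort the commenter describes, used to sanity-check
# generated code rather than trusting it on sight.
def test_bytes_to_hex():
    assert bytes_to_hex(b"\x01\xab\xff") == "01abff"
    assert bytes_to_hex(b"") == ""
```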
Optionated (Score:2)
What, no "Awwww HELL No!" option?
I've played with it a bit, but not at work. (Score:2)
At my Toastmasters club, one of our long-time members used ChatGPT to produce a speech about ChatGPT, and when I got home from the meeting, I played around with it for an hour or so. The results were fairly impressive, but to actually use them, they would still need heavy editing to strengthen the speech.
I'd consider using it to generate a first draft, but then I'd want to enhance and personalize the results before actually presenting it at a meeting.
Obligatory Poll Choice (Score:2)
The ChatGPT Email Loop (Score:2)
There's a funny [post on TT](https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.tiktok.com%2Ft%2FZTRvcYKEJ%2F) showing the ChatGPT loop:
- Manager asks GPT to compose an email to employee based on a few bullet points
- Manager sends email
- Employee receives email
- Employee asks ChatGPT to summarize email in a few bullet points
But yeah, in general it's been faster to RTFM with it (as long as you're not working with an API documented after 9/21).
How would you know if a travel blog is AI generated (Score:1)