I came to a similar conclusion about a year ago. I have an app that, among other things, lists news headlines for local communities. Some news sources provide a short summary of the article as well, but many do not. If no summary is provided, I'm reduced to using the first sentence or so of the article.
I'd hoped to use AI to generate that summary from the body of the article, but no matter how I prompted it, it fabricated "facts" into the summary far too often for me to feel comfortable using it. 90% of the time it was great, but the utter failures the other 10% of the time made it unusable.
I think the way this is addressed now is just to throw more processing at it (and more energy expenditure) by adding an arbiter AI role that checks the output to see whether it is factual.
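To make that arbiter idea concrete, here's a minimal sketch of the check-then-fall-back pattern. In a real pipeline the verifier would be a second model call doing entailment checking; here a crude token-overlap heuristic (entirely my own stand-in, not anyone's actual system) takes its place so the example runs on its own, and the names `sentence_support` and `check_summary` are hypothetical:

```python
# Sketch of a two-pass summarization check: generate a summary, then have
# an "arbiter" pass flag sentences that don't appear supported by the
# article. The overlap heuristic below is a toy stand-in for a second
# model call that would judge factual support.
import re

def sentence_support(article: str, sentence: str) -> float:
    """Fraction of the sentence's content words that also appear in the article."""
    words = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    stop = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was"}
    sent_words = words(sentence) - stop
    if not sent_words:
        return 1.0
    return len(sent_words & words(article)) / len(sent_words)

def check_summary(article: str, summary: str, threshold: float = 0.8):
    """Return (passed, flagged_sentences). If any sentence looks
    unsupported, the caller can discard the summary and fall back to
    the article's first sentence, as described above."""
    sentences = re.split(r"(?<=[.!?])\s+", summary.strip())
    flagged = [s for s in sentences if sentence_support(article, s) < threshold]
    return (not flagged, flagged)

article = "The town council voted 5-2 on Tuesday to repave Main Street this fall."
good = "The council voted to repave Main Street in the fall."
bad = "The council voted to repave Main Street after a fatal accident."

ok_good, _ = check_summary(article, good)        # supported -> passes
ok_bad, flagged = check_summary(article, bad)    # fabricated detail -> flagged
```

The catch, as noted, is that this doubles the work per article: every summary now costs a generation pass plus a verification pass, and the verifier can itself be wrong.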