Due to changes like this, I foresee universities more loudly advertising that their CS programs are accredited, because I'm pretty damn sure that using GPT to create a program will not be worthy of a CS degree in most people's eyes.
Hmmm...I am going to assume you are really talking about software engineering, and not computer science. They are related, but the article is about changes in the software engineering curricula at UW, and not so much the CS side of the house. Here's a direct quote from the article:
“We have never graduated coders. We have always graduated software engineers.”
With that said, I actually have a CS degree from the University of Arizona, but I spent thirty-odd years as a sysadmin, riding herd on software engineers whose default position was to reject anything that moved them out of their comfort zone. You're right that accreditation will matter more than ever, but accreditation bodies don't exist to preserve the past; they exist to ensure that graduates are prepared for the professional demands of the present and future, and UW's direction is clear. Here's another quote from the article:
"Coding, or the translation of a precise design into software instructions, is dead. AI can do that."
So let's be very clear about this -- academia doesn't create coders. It creates software engineers -- people who can use code to solve complex problems. LLMs shoulder some of the burden. Not all of it, but enough that the future holds exactly two paths for software engineers -- the path where people leverage LLMs, and the one where they don't. Guess which path defines a successful career in software engineering.
UW's goal is not to teach students how to prompt GPT to spit out a finished program. The goal is to focus on the actual work of software engineering: the "creative and conceptually challenging work" of figuring out precisely what the computer needs to do.
Think of the evolution of software engineering tools.
In the 1960s and 1970s, a "real" programmer might have said that anyone using a compiled language like FORTRAN or COBOL instead of writing assembly code wasn't doing "real" programming. In the 1990s, a "real" programmer might have said that anyone using an IDE with syntax highlighting and code completion instead of vi and make was taking a shortcut. Today, you're suggesting that using an AI assistant to handle boilerplate code, debug a tricky API call, or translate a Python algorithm into Rust is somehow not worthy.
In every era, the tool—whether compiler, text editor, or IDE—abstracted away tedium and repetition to free the engineer to engage at a higher level of complexity. GPT and other LLMs aren’t cheat codes; they’re the next rung on that ladder. They are the compiler's compiler. LLMs aren't replacing thought; they’re upgrading the thinker. Was Michelangelo less of an artist because he used a scaffold to reach the ceiling of the Sistine Chapel—instead of a brush with a really long handle?
So, let's talk about accreditation. In a few years, which program do you think ABET will accredit?
1. The one that ignores industry-standard tools and produces graduates who are experts in solving problems that no longer exist?
2. The one that teaches students how to leverage AI assistants to build more complex, robust, and innovative systems faster than before, while ensuring they have the deep fundamental knowledge to know when the AI is wrong?
Institutions that don't teach their students how to collaborate with AI will be the ones that lose their credibility. They'll be the new ITT Tech or University of Phoenix -- diploma mills churning out graduates unprepared for the modern workplace. The accredited, top-tier universities will be the ones, like UW, that see LLMs and AI in general for what they are -- a new tool -- and prepare their students to embrace it.