There are three problems when dealing with legacy code.
1. Figuring out what the code does.
2. Figuring out what the code was supposed to do.
3. Figuring out what the code actually should be doing.
The three are often not the same. The code lies. The comments lie. The commit messages lie. The documentation lies. The managers lie. The users lie.
By lie, I mean that what they tell you, regardless of what they believe to be true, is not reality.
For example:
Someone took a stab at writing some code in a modular fashion, or someone before you refactored it. There's a function - it says getXYZ, and it returns a value. Great! Then you dig deeper and discover that getXYZ sets several flags which are then used by the calls that come after getXYZ in the block you are looking at. You discover this only after shit starts breaking because you reordered several function calls during refactoring, none of which had the singular result of getXYZ as a dependency.
An even more straightforward example of that would be discovering that a bunch of shit broke after you noticed that nobody used the result of getXYZ and refactored out what looked like dead code. Again, because getXYZ, despite the pattern, actually had side effects.
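The trap looks something like this. A minimal sketch, all names hypothetical and not from any real codebase, of a "getter" that also mutates shared state which later calls silently depend on:

```python
class LegacySystem:
    def __init__(self):
        self.cache_valid = False  # hidden flag, set only as a side effect
        self.xyz = None

    def get_xyz(self):
        # Looks like a pure getter...
        self.xyz = 42
        # ...but it also flips a flag that process_order() relies on.
        self.cache_valid = True
        return self.xyz

    def process_order(self):
        # Silently depends on get_xyz() having been called first.
        if not self.cache_valid:
            raise RuntimeError("cache not initialized")
        return "processed"


legacy = LegacySystem()
legacy.get_xyz()        # return value ignored -> looks like dead code
legacy.process_order()  # works only because of the hidden side effect
```

Delete the "dead" get_xyz() call, or move process_order() ahead of it, and everything blows up, even though no call visibly consumed get_xyz's return value.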
At this point, now you have a problem. Is getXYZ actually supposed to return a result that someone is supposed to use? Was that its original utility, and someone just jammed shit into it because it was faster than refactoring it into something else? Or was it even worse, and this was an incomplete refactor?
Nobody knows! Nobody can tell you! The commit history doesn't go back that far, and even if it did, nobody actually leaves coherent, useful commit messages!
And don't get me started on documentation and comments. Sometimes they can tell you how the system was supposed to behave at one point... but that's not how the system behaves now, and it isn't how all the users and managers believe the system is supposed to work because they've been using the current system for so long.
"Fixing" the code to follow what was supposed to be the correct design can cause all sorts of problems with downstream processes that rely on the current broken behavior. I'm going to steal Uncle Bob's example of finally fixing a typo in a dropdown menu and causing a bunch of UI macro code that looked for that typo to fail...
Oftentimes modernization means essentially re-negotiating all the contracts, interfaces, and process workflows with all the stakeholders to come up with a common understanding of what the code should be doing. That's the best-case scenario.
The worst-case scenario is they say: use the old code for requirements, make it work exactly like that. Well, if the old code is shitty and illogical, and you need the new code to interface 1:1 with everything that plugged into it... well, guess what? You're going to get an architecture that replicates shitty and illogical 1:1. The actual code might be great, but the process will be just as hard to understand, and probably eventually just as head-scratchingly difficult to modify and maintain.
I wish our robot overlords the best of luck with this problem.