This still doesn't make sense. You cannot reliably determine the best 'faked images', and there's a reason for that: the only way we can know a well-faked image is fake is if something in the image *semantically* doesn't make sense. Say it's supposedly a photo of a ball, but the lighting doesn't make sense for a ball because there's a shading anomaly on its surface. A human can say "OK, that's supposed to be round but it's not, so it must be fake". But if the AI 'fake detector' doesn't know that's supposed to be a ball, and the lighting would be correct for that shading anomaly if it were a dimple on the ball, then the "fake" IS NOT ACTUALLY A FAKE - it's just a photo of a ball with a dimple, with correct lighting. (If the lighting is *inherently wrong* - say the scene is clearly lit from the left but a shadow falls the wrong way - then sure, we can tell something is fake. But a well-done fake applies correct scene lighting.)
Now imagine this fake is not a ball but the superimposition of one face over another - e.g. we take a photo of an actor doing something, then superimpose Trump's face over it. AS LONG AS WE GET THE LIGHTING RIGHT for Trump's face, there is no way to tell the image is "fake" (unless we know, via some other means, that Trump was not in that location at that time). A pixel is a pixel is a pixel, whether that pixel came from Photoshop or not.
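For what it's worth, the compositing step itself is mundane. Here's a minimal sketch using OpenCV's Poisson blending (seamlessClone); the file names are placeholders, and it assumes the source face has already been warped/aligned to the target frame, is the same size as it, and comes with a mask marking the face region:

```python
import cv2
import numpy as np

# Placeholder file names: an aligned face (same size as the target photo),
# the target photo, and a binary mask of the face region.
src = cv2.imread("face_aligned.png")               # face to paste in
dst = cv2.imread("target_photo.png")               # photo of the actor
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)

# seamlessClone pastes the masked patch of src centred on this point in dst,
# so use the centre of the mask's bounding box.
ys, xs = np.where(mask > 0)
center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))

# Poisson blending stitches the source gradients into the target, so the
# target's colour and low-frequency lighting bleed into the pasted region
# and the seam leaves no hard pixel boundary.
composite = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.png", composite)
```

The blending only smooths colour across the seam; getting the actual scene lighting right on the face is still on you, which is exactly the point above.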
Of course there are other little things like image grain, but that's trivial to apply: as long as we look at the grain profile of the original, we can simulate matching grain on the faked parts (I have done this many times).
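To illustrate what I mean by matching the grain profile (a rough sketch, not the exact procedure I used; the mask and file names are assumptions carried over from the example above, and plain Gaussian noise is a crude stand-in for real sensor/film grain):

```python
import cv2
import numpy as np

composite = cv2.imread("composite.png").astype(np.float32)
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Estimate the grain of the untouched parts: high-pass the image
# (image minus a blurred copy) and measure its spread outside the mask.
blurred = cv2.GaussianBlur(composite, (5, 5), 0)
residual = composite - blurred
grain_std = residual[~mask].std()

# The pasted region is usually too clean; synthesise matching grain there.
noise = np.random.normal(0.0, grain_std, composite.shape).astype(np.float32)
composite[mask] += noise[mask]

cv2.imwrite("composite_grained.png", np.clip(composite, 0, 255).astype(np.uint8))
```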
Your generator is creating 100% fake images, so a *truly* reliable detector would flag 100% of the images that generator produces.