Read this carefully (so the two statements below don't get mixed up):
1. When we feed a 256x256 image to the PatchGAN architecture, the final output is 30x30. That size refers to the whole image (not to a single patch), whereas:
2. Each 1x1 pixel of that output corresponds to a 70x70 patch of the whole image. (I already explained how each output pixel ends up with a 70x70 receptive field.)
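To see where both numbers come from, here's a quick sketch (my own illustration, assuming the standard pix2pix PatchGAN: 4x4 kernels, strides 2,2,2,1,1, padding 1) that computes the output size and the receptive field of one output pixel:

```python
# Assumed PatchGAN config: 4x4 convs, padding 1,
# strides 2,2,2,1,1 (C64-C128-C256-C512 + final 1-channel conv).
KERNEL, PADDING = 4, 1
STRIDES = [2, 2, 2, 1, 1]

def output_size(in_size):
    """Spatial size after all conv layers (standard conv size formula)."""
    size = in_size
    for s in STRIDES:
        size = (size + 2 * PADDING - KERNEL) // s + 1
    return size

def receptive_field():
    """How many input pixels one output pixel 'sees' (walk layers backwards)."""
    rf = 1
    for s in reversed(STRIDES):
        rf = rf * s + (KERNEL - s)
    return rf

print(output_size(256))   # 30 -> a 30x30 grid of real/fake verdicts
print(receptive_field())  # 70 -> each verdict covers a 70x70 input patch
```

So the 30x30 map and the 70x70 patches are two views of the same network: the map is the grid of verdicts, and the receptive field is the patch each verdict judges.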
I hope that clears it up. As for why PatchGAN was used at all, you already know: it fulfills the discriminator's objective of judging realism locally, patch by patch.
(So, as I said before, if you're familiar with the GAN concept, you'll definitely get it :)
Cheers! :D
Ask anytime if you're still not satisfied with this answer.