I've had the opposite experience - I've never really had an issue getting good-looking DVD encodes, assuming I start with a high-quality master.
In general I've found you'll get the best results if you apply a light noise-reduction pass (with something like Neat Video) to your master, even if most of the footage isn't visibly noisy. This cleans up low-level noise the encoder would otherwise have to spend bits on, and it also smooths out fine detail that won't survive the downscale to SD anyway.
My master file is usually a ProRes HD file, 16x9, and I encode it with Compressor using the "Highest Quality" option (two-pass VBR, best quality) with a peak data rate of 8 Mbit/s. Progressive encode, 23.976 frame rate, anamorphic aspect ratio.
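One thing worth keeping in mind with two-pass VBR is that the 8 Mbit/s figure is only a peak; what determines whether the encode fits on the disc is the average bitrate. A quick back-of-the-envelope check (my own illustrative assumptions - a 90-minute program and a single AC-3 audio track - not anything from Compressor's internals):

```python
# Back-of-the-envelope DVD bit budget (illustrative assumptions, not Compressor internals).
DVD5_BYTES = 4_700_000_000   # single-layer DVD-5 capacity (decimal, as marketed)
RUNTIME_S = 90 * 60          # assume a 90-minute program
AUDIO_KBPS = 448             # assume one AC-3 track at 448 kbit/s

total_kbps = DVD5_BYTES * 8 / 1000 / RUNTIME_S
video_kbps = total_kbps - AUDIO_KBPS
print(f"max average total bitrate: {total_kbps:.0f} kbit/s")  # ~6963 kbit/s
print(f"max average video bitrate: {video_kbps:.0f} kbit/s")  # ~6515 kbit/s
# The 8 Mbit/s setting is a *peak*: the two-pass encoder spends bits where
# the picture needs them while keeping the average under this budget.
```

So an 8 Mbit/s peak with VBR works out fine for typical runtimes; it's only on very long programs that the average budget starts forcing quality down.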
That produces files that look good when played back on a consumer DVD player at typical home screen sizes (32-60"). They're not nearly as detailed as HD, but that's to be expected when you're only retaining about a sixth of the pixels of the original.
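The resolution loss is easy to quantify (assuming the ProRes HD master is 1920x1080; a 1280x720 master would change the ratio):

```python
# Pixel-count comparison: 1080p HD master vs. anamorphic NTSC DVD.
# (Assumes a 1920x1080 master - an assumption, not stated in the workflow above.)
hd_pixels = 1920 * 1080    # 2,073,600
dvd_pixels = 720 * 480     # 345,600 (anamorphic: stretched to 16:9 on playback)
print(hd_pixels, dvd_pixels, hd_pixels / dvd_pixels)  # ratio is exactly 6.0
```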
With those settings, when things go bad it's almost always a result of the source material. If the source has a lot of noise it really pushes the limits of the encoder, and you'll start seeing blockiness in the image. If you have highly saturated reds or blues they'll fall apart and look very blocky because of the 4:2:0 chroma subsampling. Shaky handheld footage also strains the encoder, and if it's combined with a lot of noise or high detail (foliage, etc.) it's going to turn to mush. Interlaced footage doesn't encode as well as progressive, and if you shot something like 24-in-60i and didn't remove the pulldown (reverse telecine) before you started editing, the final result won't be as good as it could be.
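The 4:2:0 problem is worth quantifying: chroma is stored at half resolution both horizontally and vertically, so each colour-difference plane carries only a quarter as many samples as the luma plane. Saturated reds and blues keep most of their detail in those chroma channels, which is why their edges fall apart first. The arithmetic (plain subsampling math, not tied to any particular encoder):

```python
# Sample counts per frame for 4:2:0 subsampling at NTSC DVD resolution.
width, height = 720, 480
luma = width * height                        # full-resolution Y plane
chroma_each = (width // 2) * (height // 2)   # Cb and Cr halved in both axes
print(f"Y samples:  {luma}")         # 345600
print(f"Cb samples: {chroma_each}")  # 86400
print(f"Cr samples: {chroma_each}")  # 86400
# Colour detail is effectively 360x240: a saturated red edge can only change
# every other pixel, which reads as blockiness once compression piles on.
```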
It's the standard garbage in/garbage out scenario; the better your source, the better your final results, so what you get in the end has as much to do with where you start as it does with how you do the final encode.