Hi, as the title says, is it possible? I'm asking because I have many SD interlaced videos that I want to upscale/enhance before feeding them to a 1080p display. I've heard that doing it the other way round (resizing before deinterlacing) is not recommended and causes problems. Thank you for listening, Riado
Correct: resizing (scaling) an interlaced video ruins it, because the resizer blends lines from the two fields, which belong to different moments in time. So if you want to apply such filters, you must deinterlace first.

Also, you can't use "hardware" deinterlacing -and- apply ffdshow filters afterwards. Hardware deinterlacing means nothing more than: keep the video as it is (interlaced), flag it as interlaced and send it to the video renderer. The renderer then takes care of the deinterlacing, which usually means the GPU ("hardware") performs it. Of course ffdshow cannot apply any filters to the deinterlaced video with this method, because ffdshow has to hand the video to the renderer *before* the deinterlacing takes place.

So whenever interlacing-incompatible filters need to be applied, ffdshow has to deinterlace the video itself, in software.
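For illustration, here is a minimal AviSynth sketch of the software order described above: deinterlace first, resize afterwards. The source line, the external yadif.dll plugin, the field order and the target resolution are only placeholders, so adjust them to your own setup:

LoadPlugin("yadif.dll")         # external Yadif plugin, assumed to be installed
DirectShowSource("sample.mpg")  # placeholder source filter/file
AssumeTFF()                     # tell the deinterlacer the field order of your material
Yadif(mode=0)                   # software deinterlace FIRST...
Spline36Resize(1280, 720)       # ...and only then resize/scale

If the last two lines were swapped, the resizer would mix lines from the two fields and you would get exactly the combing/ghosting artifacts described above.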
Thank you lord, very clear explanation! I will try this method: s/w deinterlace first, then enhance and upscale to 720p (instead of 1080p, so I don't overload the CPU).
Wouldn't it be possible with Nvidia cards to use a CUDA-based MPEG-2 decoder and then use ffdshow's raw video processing after that? Of course, someone would need to come up with a CUDA-enabled MPEG-2 decoder first.
A bit of a bump, but I see no sense in creating a new topic for this. Just to clarify: if I enable YADIF deinterlacing in ffdshow as well as Resize, does it automatically deinterlace the video before resizing (that seems like the most sensible approach), or do I have to load a YADIF AviSynth filter first?
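If you would rather not rely on ffdshow's internal filter ordering, an explicit AviSynth chain makes the order unambiguous. This is only a sketch; the plugin name, source line, field order and target size are assumptions to adapt to your setup:

LoadPlugin("yadif.dll")                 # external Yadif plugin for AviSynth (assumed installed)
src   = DirectShowSource("sample.mpg")  # placeholder source
deint = src.AssumeTFF().Yadif(mode=1)   # 1) deinterlace (double-rate bob for smooth playback)
deint.Spline36Resize(1280, 720)         # 2) resize only the already-deinterlaced clip

Because the resize is applied to the named deinterlaced clip, the order cannot be anything other than deinterlace first, resize second.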