![doom vfr gtx 960](https://scufgaming.com/media/wysiwyg/Content/Doom_Eternal/scuf_social_doom-eternal_layout_vantage2_1920x1080v1.jpg)
# Doom VFR GTX 960 1080p
On something like the 10 Mbps 1080p H.264 baseline clip used by WinSAT, it again mainly saved CPU time. Testing a 100 Mbps clip, the biggest difference was lower CPU usage. DXVAChecker x64 - Playback Benchmark (scaled to 1280x720):
![doom vfr gtx 960](https://i.ytimg.com/vi/0mIyoQnnmP4/maxresdefault.jpg)
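For context on where tables like these come from: DXVAChecker-style tools query the driver through `IDirectXVideoDecoderService`, asking for the supported decoder profile GUIDs and then the render-target formats of each profile. This is also how you can check whether a 10-bit profile exposes a P010 target at all. A minimal C++ sketch, assuming the default adapter and using the desktop window as a placeholder device window; error handling is mostly omitted:

```cpp
// Sketch: enumerate DXVA2 decoder profiles and their render-target formats.
// Assumptions: default adapter, desktop window as a placeholder device window.
#include <windows.h>
#include <d3d9.h>
#include <dxva2api.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")
#pragma comment(lib, "dxva2.lib")

int main()
{
    IDirect3D9Ex* d3d = nullptr;
    if (FAILED(Direct3DCreate9Ex(D3D_SDK_VERSION, &d3d))) return 1;

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferWidth  = 1;
    pp.BackBufferHeight = 1;
    pp.BackBufferFormat = D3DFMT_X8R8G8B8;
    pp.hDeviceWindow    = GetDesktopWindow();

    IDirect3DDevice9Ex* dev = nullptr;
    if (FAILED(d3d->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
            pp.hDeviceWindow,
            D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
            &pp, nullptr, &dev)))
        return 1;

    IDirectXVideoDecoderService* svc = nullptr;
    if (FAILED(DXVA2CreateVideoService(dev, __uuidof(IDirectXVideoDecoderService),
                                       reinterpret_cast<void**>(&svc))))
        return 1;

    UINT  nGuids = 0;
    GUID* guids  = nullptr;
    if (SUCCEEDED(svc->GetDecoderDeviceGuids(&nGuids, &guids))) {
        for (UINT i = 0; i < nGuids; ++i) {
            UINT       nFmts = 0;
            D3DFORMAT* fmts  = nullptr;
            if (SUCCEEDED(svc->GetDecoderRenderTargets(guids[i], &nFmts, &fmts))) {
                for (UINT j = 0; j < nFmts; ++j) {
                    // FOURCC targets (e.g. 'NV12', 'P010') print readably;
                    // non-FOURCC D3DFMT values show up as raw bytes.
                    printf("profile %u -> target %.4s\n", i,
                           reinterpret_cast<const char*>(&fmts[j]));
                }
                CoTaskMemFree(fmts);
            }
        }
        CoTaskMemFree(guids);
    }

    svc->Release();
    dev->Release();
    d3d->Release();
    return 0;
}
```

If the 10-bit HEVC profile GUID is missing, or lists no P010 render target, then native DXVA2 decoding of 10-bit HEVC simply isn't on offer from that driver.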
DOOM Eternal and Red Dead Redemption 2, GPU & Displays. You can join the discussion on Nvidia's GeForce 496.76 Game Ready Driver on the OC3D Forums. Supported products include the GeForce GTX 980 Ti, GeForce GTX 980, GeForce GTX 970, GeForce GTX 960, and GeForce GTX 950.

The ASIC decoder on this GTX 770 seems too slow to bottleneck copy-back on high-bitrate 4K H.264.

What sort of decoding performance discrepancy were you seeing between 'normal dxva2 copyback' and 'dxva native' on your GTX 960 prior to these changes? If copyback-direct greatly improves performance using the modern ASIC decoder on your GTX 960, that's all that really matters to me. I figured as much, which is why I've always seen the hybrid decoder as nothing more than a crutch.

The problem is that most GPUs can't do the P010/P016->RGB conversion under D3D9Ex, so if the decoder outputs a P010/P016 surface, the GPU can't convert it to a D3D-compatible format. D3D11.1 can read YUV 4:2:0 formats directly (mapping one texture to two SRVs, one for Y and the other for UV), so the YUV->RGB conversion in the DXVA pass is no longer needed; but that requires a DX11 decoder (one that uses a D3D11 device instead of D3D9Ex), and a D3D9Ex device can't share a YUV 4:2:0 DXVA surface as DXGI_FORMAT_NV12 or DXGI_FORMAT_P010/DXGI_FORMAT_P016. Therefore, if you want to avoid CPU copy-back, you need to convert YUV 4:2:0 to RGB (StretchRect or VideoProcessBlt) in the DXVA pass, then in the render pass either convert it back to YUV 4:2:0 and apply your upsampling and color-space mapping algorithms, or use the RGB surface directly. Before D3D11.1, the GPU couldn't handle any YUV 4:2:0 format (NV12, P010, P016, etc.) directly.

So, it seems that DXVA2 native and 10-bit HEVC decoding are incompatible. Could Intel or AMD do any tricks in the driver to make DXVA2 native and 10-bit compatible?
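To make the D3D11.1 point above concrete, here is a minimal sketch, assuming a device whose driver supports NV12 shader-resource views (Windows 8 / D3D11.1 era): one NV12 texture is exposed to shaders through two views, `R8_UNORM` for the Y plane and `R8G8_UNORM` for the interleaved UV plane, so a pixel shader can sample both planes and do the YUV->RGB conversion itself, with no StretchRect/VideoProcessBlt in the DXVA pass and no CPU copy-back. The helper name is hypothetical and error handling is trimmed; for P010/P016 the analogous view formats are `R16_UNORM` and `R16G16_UNORM`.

```cpp
// Hypothetical helper: create an NV12 texture plus the two plane views
// (Y as R8, UV as R8G8) that D3D11.1 allows a shader to sample directly.
#include <d3d11_1.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateNV12Views(ID3D11Device* device, UINT width, UINT height,
                        ComPtr<ID3D11Texture2D>& tex,
                        ComPtr<ID3D11ShaderResourceView>& srvY,
                        ComPtr<ID3D11ShaderResourceView>& srvUV)
{
    // One NV12 texture holds both planes. A decoder-produced surface would
    // also carry D3D11_BIND_DECODER; driver support for NV12 SRVs is required.
    D3D11_TEXTURE2D_DESC td = {};
    td.Width            = width;
    td.Height           = height;
    td.MipLevels        = 1;
    td.ArraySize        = 1;
    td.Format           = DXGI_FORMAT_NV12;   // DXGI_FORMAT_P010 for 10-bit
    td.SampleDesc.Count = 1;
    td.Usage            = D3D11_USAGE_DEFAULT;
    td.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    HRESULT hr = device->CreateTexture2D(&td, nullptr, &tex);
    if (FAILED(hr)) return hr;

    // View #1: the Y plane, seen by the shader as a single-channel texture.
    D3D11_SHADER_RESOURCE_VIEW_DESC sd = {};
    sd.Format              = DXGI_FORMAT_R8_UNORM;   // R16_UNORM for P010/P016
    sd.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    sd.Texture2D.MipLevels = 1;
    hr = device->CreateShaderResourceView(tex.Get(), &sd, &srvY);
    if (FAILED(hr)) return hr;

    // View #2: the half-resolution interleaved UV plane as a two-channel texture.
    sd.Format = DXGI_FORMAT_R8G8_UNORM;              // R16G16_UNORM for P010/P016
    return device->CreateShaderResourceView(tex.Get(), &sd, &srvUV);
}
```

With both SRVs bound, the pixel shader samples Y and UV, applies the color-space matrix, and writes RGB at presentation time, which is exactly the step that D3D9Ex forced into the DXVA pass or onto the CPU.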